Will the open-source revolution in decentralized AI create a peace akin to nuclear deterrence?

24-07-24 16:55
Translator's note: In The Square and the Tower, author Niall Ferguson explores in detail how power has shifted throughout history between centralized power structures (the tower) and decentralized networks (the square). By analyzing historical events, Ferguson reveals the important role of networks in political, economic, and social change, and emphasizes the growing power of networks in modern society.


With Ferguson's views as background, this episode of the Delphi podcast features a lively conversation about the two technological development paths of centralized and decentralized AI. The four guests first analyze the importance of trust and privacy in AI adoption, and carefully distinguish between different types of AI and the incentive structures behind them. The discussion then turns to consumers' choice between convenience and privacy protection, as well as the vulnerabilities and security risks created by centralized power.


One of the core issues discussed is how to build a competitive decentralized AI alternative, especially under the current highly centralized technology paradigm. The guests analyze existing scaling approaches and their limitations, and discuss the potential of open-source solutions. They believe that, given national security considerations and the need for economic value capture, the importance of distributed approaches will only grow.


In addition, the podcast discusses the role of economic value capture and distribution in AI business models, and the distribution-channel moats of large and small companies. By comparing the different business models of OpenAI and Meta, the guests elaborate on the disruptive potential of decentralized AI and crypto, especially their advantages in trust and deterministic execution. Finally, they look ahead to the potential for financializing AI on blockchains and point out that the widespread adoption of AI agents will be a catalyst for exponential growth in this field.


The article is long; we recommend bookmarking it for later.


TL;DR


· Delphi researcher Michael Rinko believes that capitalism provides a good incentive mechanism for technological development. Private companies are rewarded handsomely by developing products that are useful and safe for people. Therefore, leaving AI in the hands of the private sector is not as bad as people think. Competitors in the private sector will compete with each other to become the most useful technology providers, and ultimately provide these technologies to consumers in the safest way. On the contrary, if this technology is left completely open, some people will use these technologies for malicious activities, and there is no effective way to stop them. The best way is to provide some kind of controlled access, which is exactly what the private sector can do.


· Gensyn co-founder Ben Fielding believes that the premise for capitalism to work is auditability. Without an audit mechanism, companies may do things that are harmful to the world in the pursuit of profits. The open source model can deliver auditability without affecting a company's ability to develop models. In addition, when discussing the dangers of AI, we should not argue from what it may theoretically do in the future, because in that framing any possibility exists.


· Ambient co-founder Travis Good believes that large companies will have to shift to a more distributed AI development paradigm to achieve scale. The current paradigm is costly and hard to push to ever-larger scale. They will have to make a transition at some point, and that transition will be very expensive for them. The scaling of models themselves will not stop, but our ability to successfully manage and deploy these models will begin to decline before model performance does.


· Delphi researcher Pondering Durian believes there is currently a reflexive cycle between the capital markets and the large companies. The large companies have access to a lot of cheap capital, so they have plenty of firepower to pursue the law of scale. But the question is, do they have enough revenue to justify the continued investment? If not, people will stop funding them; Google, Amazon, and Facebook will all be hit hard, and they won't be able to build those $100 billion clusters.


· Ben Fielding believes that the real moat of big model companies is the actual distribution capability. OpenAI will try to provide software that uses intelligence for distribution, but soon, all companies with distribution capabilities will want to break away from OpenAI. Meta recognizes that their moat is their ability to distribute these models to users. They are able to apply these models to actual real-world situations, not just trying to make money from the models themselves. So they drive open source models, not out of altruism, but for actual strategic gains.


· Michael Rinko believes there are three problems crypto can solve for AI. First, crypto is trustless: in crypto you don't have to trust anyone, which is an attractive property for AI. Second, crypto is deterministic: the real world is random and full of uncertainty, but AI doesn't like this uncertainty; it prefers deterministic execution, and crypto provides it. Third, AI can achieve a kind of super-capitalism through crypto: crypto can financialize anything, and AI can use this to accumulate resources.



Tommy: I'm Tommy, founding partner at Delphi Ventures, and I'm excited to host this discussion today on crypto AI vs. centralized AI. I've got four of the top experts in the industry here to introduce themselves. First up is Ben Fielding, co-founder of Gensyn. Gensyn is committed to pushing the boundaries of machine learning through decentralized computing. We also have Travis Good, co-founder of Ambient, who focuses on world-class open source models and is also a portfolio company of ours. These two represent the crypto AI camp.


On the Delphi side, we have Michael Rinko, senior market analyst, who wrote Delphi's first AI report, The Real Merge, in February 2024. And the anonymous analyst PD, who wrote a great report on centralized and decentralized AI, The Tower & the Square. This is also the topic we are discussing today.


Let's do a brief self-introduction in this order, Ben will start first.


Ben Fielding: Thank you for the invitation. I am the co-founder of Gensyn, which is a machine learning computing protocol. You can think of us as a protocol layer, a bit like the early network protocols, that sits over hardware with machine learning capability and allows it to be used for any machine learning training task in the world. Our idea is that you are no longer limited to resources like AWS or Google Cloud to train models, but can use any device in the world. You can send a model directly to a single device, or select a set of devices to form a computing cluster of any size. This turns machine learning compute into a commodity like electricity, rather than a GPU that must be rented and reserved from someone. It becomes an always-available marketplace.


Tommy: That's helpful. Travis, your turn.


Travis Good: Okay, I'm Travis, co-founder of Ambient. I have a PhD in IT and have been focusing on AI-related industry applications for the past decade, including projects related to drug discovery and critical infrastructure.


Michael Rinko: Hi everyone, I'm Michael, an analyst at Delphi, focusing mainly on market analysis, but also covering other areas. As Tommy mentioned, I wrote a report on AI a few months ago, and I'm very happy to discuss this topic with you today. PD, please introduce yourself.


Pondering Durian: Hi everyone, I'm Pondering Durian, a member of the Delphi research team, mainly responsible for research and investment in consumer internet, enterprise software, and crypto. I'm very happy to participate in this discussion and look forward to the conversation.


Centralized vs. Decentralized AI


Tommy: PD, can you tell me what your report is about? This is where we'll start today.


Pondering Durian: Okay. The report is titled The Tower & the Square, a nod to Stanford historian Niall Ferguson's book, where he explores the dynamics between hierarchies and networks. I think the last 30 years have been the story of the rise of networks, from globalization, liberalization, capitalism to the internet, cryptocurrencies, and social media. However, in the last five years, traditional hierarchies have re-emerged, with states and corporations back in the driver's seat. And the emergence of AI does seem to be a pretty big centralizing force in its current form. So the real question before us is, is the future a world of multi-trillion parameter models controlled by a few West Coast giants, or a world of decentralized models of all shapes and sizes? There are good arguments for both views, and that's what we're going to discuss today.



Tommy: Ben or Travis, who will start? As the more open source camp, what is your point of view?


Travis Good: Okay, let me repeat PD's point to see if I understand it correctly. You mentioned that we have seen a large-scale consolidation in this field and vertical expansion, which has brought a clear advantage to closed source AI. I generally agree with this judgment.


Then you raised two possibilities: are we facing a trillion-parameter base model launched by the closed source camp, or will there be a large number of different models? I would like to add that there is a third possibility: the open source camp can also develop its own trillion-parameter model and compete with the closed source model in terms of capabilities.


Can I summarize your point in this way?


Pondering Durian: Yes, you made a good point. We are currently seeing vertically integrated products leading the way in capabilities, but these models have not proven themselves to be as sticky as they were in the Web2 era. So it seems that closed source players have the upper hand at the moment, but it remains to be seen what the future holds. I think there is a viable solution for decentralized AI and the open source community to provide alternatives at scale and at all levels of the entire technology stack.


Travis Good: I totally agree. I think at this point, it may be necessary to distinguish between what is ideal and what is possible. I'd like to think through the desirability of closed source AI with you all, and you are welcome to chime in at any time. Many people may have heard this view: we are all "cyborgs". Our lives are deeply intertwined with computers, constantly getting information from various devices. We are already "augmented humans". I think AI or AGI will become our future "coprocessor", and we will constantly interact with it to improve ourselves. In the future, it may even be directly integrated with our brains.


Then the question is: do you want this "coprocessor" to be trustworthy? I find it hard to believe that these vertically integrated "surveillance capitalists" can offer a credible solution. On a personal level, it seems extremely dangerous to let companies that have abused their users and customers control our minds. We did not accept the Clipper chip, through which the NSA attempted to implant a backdoor in all our encrypted communications, and we should not accept closed source companies trying to get between us and the mind coprocessor that helps us think.


This is my personal view. Before I talk about the societal level, let me stop and listen to your thoughts.


Ben Fielding: I totally agree with this view of the future. I think machine learning and AI as an augmentation of human capabilities are the most obvious development direction.


In fact, we are already experiencing this technological progress. For example, look at how we use the Internet and the generation that has grown up in an environment of shared knowledge bases. Although I don't have specific references, there are studies that show that the way we remember information has changed. Now we remember more about how to get information through tools such as Google than the information itself. I think this trend will continue, and humans will learn around this new tool and eventually integrate with it completely. I agree that this tool should not be controlled by a single entity. We don't want this entity to have the ability to censor the content of our brains.


I think from the perspective of the incentive system, these companies are not doing anything malicious. They are just pursuing profits, which is what we expect from companies and the original intention of the system. But what we need to think about now is how to preserve this incentive mechanism while making small adjustments to avoid too much power being concentrated in one place. We have seen the negative effects of monopoly in history, especially in certain fields, and this effect is very serious. And in the field of AI, this impact may be further amplified because it directly affects people's minds. You can say that this is already happening with social media, and AI will be a deeper level of this impact. So I completely agree with what you said, and this is also how we think about the future world.


Michael Rinko: I want to bring up some points from another perspective and play the role of "opposition debater".


I think one of the challenges about this topic is what type of AI are you talking about? Are we talking about the AI of today, such as chatbots or ChatGPT, or general artificial intelligence (AGI) that may appear in the next few years, or artificial superintelligence that can rule the world and colonize the galaxy?


The trade-offs and incentives faced by different types of AI are different. I think the easiest to understand is the situation we face now: how to build a world as safe as possible with current technology. There is no doubt that closed source AI is probably the safest way for us to manage this technology today, and it will be the same for the foreseeable future. I will list a few points and hope to hear your reactions. I think when you mentioned incentives, that got me thinking a little bit.


I actually think that capitalism and our current economic system provide very good incentives. Companies are well rewarded for building products that are useful and safe for people. If you build products that are not useful enough or that cause harm to people, you don't make a lot of money. In other words, fundamentally, the current incentives seem to be working. We have a lot of wealth created every year, inequality has decreased, and these indicators generally show that the world is getting better, and I think a lot of this is due to capitalism, which has this reward system that encourages people to solve problems in a safe way.


I don't see how AI will invalidate capitalism. So I think leaving this technology to the private sector is not as bad as people think it is, because it's just trusting capitalism to work. Competitors in the private sector will compete with each other to be the most useful technology providers, and ultimately provide these technologies to consumers in the most secure way. If you believe in this basic principle, then generally speaking, the best solutions that are provided in the most secure way will emerge, and that's a good future.


In contrast, if this technology is left completely open, anyone with a computer and an internet connection can create and disseminate these AIs. Then, although it may not be now, at some point in the future, these AIs may cause significant damage and harm. There are people in the world who will use these technologies for malicious activities, and we currently have no effective way to stop them. I think the best way may be some kind of controlled access, which is exactly what the private sector can provide.


Consumer Preferences and Security Issues in AI Adoption


Pondering Durian: Can I interject? I want to add to that because I think the argument about capitalism and security is a good one, but I also want to mention that over time, each generation of consumers has essentially chosen convenience over an ideal privacy solution. So, really, I'm more concerned about the next three to five years where every consumer has an Apple iPhone in their pocket, everybody's using Google, they have this wonderful Google suite that's going to be integrated with AI. It's a little bit too idealistic to think that these co-processors operated by big companies won't automatically be adopted by consumers in the simplest way and gradually become the default solution.


These products will be very good, as Michael said. So you could be sliding down a dangerous slope where users will tend to choose these products because of these natural advantages, even if they are not ideal from a social perspective. That's the point I wanted to add.


Ben Fielding: Yeah, I agree with that trend, especially for users: ideological reasons alone don't drive adoption of a product at scale. They might attract a small number of users, but the product won't spread widely. I learned this lesson very explicitly, and the hard way, with my last startup, which was focused on consumer privacy.


But getting back to Michael's point: when you make the case for the dangerousness of the alternative, your argument starts from current reality but quickly jumps to what could happen in the future. I think that whenever we discuss the dangers of AI, it always involves what it could theoretically do in the future, and we enter an infinite space of possibilities. If we consider the dangers of what this system can actually do now, rather than hypothetical future capabilities, the discussion will make more sense. Otherwise, we lose the ability to refute anything, because any possibility exists.


Regarding the problem of capitalism addressing these negative effects, I agree that the premise is that there must be auditability. If there is no auditability, if a company can do something harmful to the world in the pursuit of profit, and this behavior never affects its profits, I think as an economically rational actor, this company may do this. Therefore, we must introduce some kind of audit mechanism into the system to discover these problems.


You can solve this problem from the perspective of government regulation, or you can take another way, which is to open up the development of certain technologies. I think the latter is better for the world because we can achieve auditability without affecting the company's ability to develop models. In particular, this is particularly important when the value of the model is not entirely reflected in its architecture, but elsewhere. At present, we are still exploring the true value of machine learning models. I personally believe that the value lies in the ability to distribute, but in fact we have been exploring and adjusting on this issue. I think the architecture of the model itself can be open, but there may be proprietary value in other areas such as data and applications.


Travis Good: I want to interject and add to your point, while also expressing a slightly different view. I'm taking some notes, Michael, you started by talking about incentives and how they work well. I think the average consumer would probably disagree with you.


Cory Doctorow has written about the "enshittification" of the Internet, and anyone who has used Google knows that it has become a terrible user experience compared to what it used to be. It's also a terrible advertising experience for businesses. The decline happened when intermediaries like Google started to take too much of the pie. We learned from the lawsuit against Google that it actually rigged the rules of the game to push advertising prices higher and give users an experience that made it hard to find useful results. And they were not actually punished for this. We need to dig deeper into why: maybe it's improper regulation or a lack of regulation of corporate behavior, maybe it's inertia, but ultimately what we get is a situation that is far from the optimal state of capitalism.



We actually got what's called an "oligopolistic optimality" where surveillance capitalists are handsomely rewarded and pretty much every other type of business suffers. For example, I don't think the news media is happy with how Facebook has handled things over the years. So, I think to understand what's going to happen in the future, just look at the past. Look at the rise in teenage depression, for example, and the manipulation of Facebook's algorithm, all of these abuses.


Michael Rinko: I think you make some great points, Travis, and I'll echo some of them. We may just have different perspectives, but I'm very optimistic about the development of technology. I think that throughout human history, the advancement of technology has generally improved people's lives. If you took someone from a hundred years ago and brought them to today, they would find that being able to instantly send text messages, video call, order a car to their door, order food to be delivered to their door is pure magic.


So, while side effects such as teenage depression do exist, these problems are only a small part of the huge benefits that technology has brought. AI will not be an exception. Also, in terms of possible future dangers, I admit that this is indeed a problem. AI is particularly difficult to understand, and even the top labs cannot fully understand why the model does certain things.


And if you look at historical technologies that are dual-use or dangerous, such as the atomic bomb, you can tell how dangerous it is by testing it. If it explodes, it is obviously dangerous and should not be released; if it doesn't explode, then it doesn't work and is not dangerous. With these models, we can’t clearly define the scope of their effects, so the question of whether they should be released is very vague. This makes it difficult for regulators or top experts to set safety measures. I’d like to know what you think about this.


Travis Good: Ben, do you want to respond first?


Ben Fielding: Just to respond briefly: you mentioned optimism, yet for these technologies, where we don't know how they will present themselves in the future, you assume they will bring negative outcomes. The optimistic position is that they currently have no negative effects; we don't know whether there will be bad results in the future, but optimistically, it should be fine. Yet when it comes to the models themselves and their future behavior, that optimism turns into pessimism.


Tommy: PD, I’d like to hear your views as well.


Pondering Durian: Yes, returning to Travis’ point, it is undeniable that market concentration has increased dramatically over the past few decades. Consumers need some form of content curation because it's hard to find the information they need. This is actually what Ben Thompson has been arguing for nearly two decades, that if you aggregate demand online, supply will follow. So there's a network effect around these walled gardens because they're so curated.


These companies, through their privileged position, have formed data monopolies and cash flow monopolies, giving them enormous market power that has become increasingly exploitative over time, in contrast to the early growth phase of the network. Since AI is more of a sustaining technology than a disruptive one, it actually further consolidates this market power unless we find a different solution.


To become the dominant foundation model, the key factors now are talent, data, distribution, and capital. Thanks to the network effects and user bases built up during the dominant Web2 era, these entities are very well positioned for the new era of AI. Given the high cost of compute and the proprietary datasets they hold, from my perspective, while this may not be the most ideal state for society, it is the reality of our current system.


I don't think capitalism is bad, but right now four or five companies do have a lot of market power. Looking at the S&P 500, five companies account for 30% of the market, and that share is still increasing, which is indeed a problem. In the next generation of the Internet, the trend may not be toward decentralization; it may instead be toward further centralization. So that's something to watch, because there really are structural forces in the system we've created that push a few companies toward a lot of market power.


Travis Good: Security is often mentioned, and I don't want to jump too far into the future, but I do want to point out some issues with the threat model as it's currently configured. Right now we have four companies holding all the data, and in the future they're going to have even more AI capabilities. I think that's a very bad threat model.


First of all, these companies have been hacked. Microsoft, for example, has failed to protect critical national security data in multiple audits and has even lost keys that should have been impossible to steal. Concentrating all the data and these models makes these companies the most attractive targets for nation states, independent hacker groups, and so on. This situation could improve if these powers were more decentralized and we had credible open source competitors performing certain functions.


Michael Rinko: Travis, can I chime in?


Travis Good: Of course.


Michael Rinko: You mentioned the oligopolies in the current state of technology, how will data concentration be different in the future world of AI technology? If these oligopolies continue to exist, what progress will there be?


Travis Good: Let me be specific. ChatGPT is now integrated into a variety of applications, such as Notion, Evernote, spreadsheets, and so on. Its ability to act on behalf of users across many programs means the amount of data being accumulated is staggering. Combine that with the data obtained through surveillance capitalism, all the signals collected from mobile phones, website cookies, and so on, and you get a detailed personal intelligence profile. This concentration of data creates a huge national security threat; think of the hack of the US government's background check provider a few years ago, which led to the theft of security clearance records.


We have heard from many insiders that the security of these companies is very poor, and the government's security is also very poor. If we think this is a healthy paradigm, then we are in a very vulnerable situation.


Tommy: I feel that there are two views at present. One side is ideological: crypto AI is more focused on consumer preferences, privacy, and future benefits. The other side is realistic: centralized AI has the most data, hardware, and talent, and a more intuitive experience. Can Ben or Travis describe how to build an actually competitive alternative?


The Advantages and Potential of Decentralized AI


Travis Good: I think this is a great question that touches on what is ideal and what is possible. So I'd be happy to try to answer it first and then hand it over to you, Ben. In terms of possibilities, I think it's important to note that machine learning (ML) is not a strictly scientific field, and virtually anyone can contribute to it, and there have been a lot of contributions that are relatively simple modifications of previous paradigms. So, the barriers to entry for ML are relatively low.


This bodes well for future open source capabilities, because as computing becomes cheaper, people will be able to experiment and innovate at a much larger scale. I think the talent around the world will mobilize in this area and devote themselves to this cause. I also think that many governments will realize that they want to remain competitive in the new economic paradigm, so they will fund a lot of research in this area. Research institutions around the world will also focus on building capabilities in this area.


We're not quite there yet; we have open models that are roughly on par with the previous generation of closed models. But I think the future is very bright, because talent from all over the world will not just be concentrated in San Francisco; it will be applied to solving this challenge that carries huge economic rewards.


Tommy: Yeah, that's very helpful. Ben, I wanted to get your take on that, do you have anything to add to Travis's thoughts?


Ben Fielding: I think the two viewpoints we're discussing are a little blurry because both sides are basically saying that what we want is a high level of normal capitalist behavior and competition. But the way the "tower model" is going right now is that the incentives for large model builders are to prevent that competition from happening. They don't necessarily want that level of competition, and the resources underlying ML can be largely controlled to prevent too much competition from happening.


But as Travis said, ML itself is not hard. The expert knowledge required is already decentralized; academic publication made that happen. You can get the papers, and, at least until OpenAI started centralizing researchers to build models, you could get the code to implement them. Then you run into the bottleneck of data for training models, which is also largely open, because there is a lot of open data on the Internet; although we may run into copyright issues, as an individual you still have access to a lot of data to train these models.


Then there are the limitations of computing resources, which are easier to control. If you have unlimited resources, you can take a large part of this limited resource and lock it up. These model builders also have the largest cloud service ecosystem, which means they are able to control the supply and cost of using resources.


If we look to the future, ML will become something that runs on electricity, ultimately a conversion of electricity into knowledge. You need specific hardware to do this, so hardware resources can be captured. If this is allowed to continue, large companies will control profits by capturing the underlying resources, which is not the result we want.


If the competition were over the actual value delivered to the end user, then making it as open as possible would be good. But this is not fully open competition; it is the capture and renting out of resources, which is not a good outcome of capitalism. What we want is actual value creation.


Ultimately, a model's edge erodes with use, which means the model itself cannot maintain a competitive advantage in the long run. We need to find other ways to stay competitive, such as building an ecosystem, allowing more people to participate in it, and actually competing at the interface where value is delivered, rather than through the capture of resources. In that world, the real value of ML lies at the user level, in providing things that actually improve life, and technology keeps advancing through competition at the front end. Behind it, there can be a completely open ecosystem, providing richer competition and better outcomes.


Travis Good: First, I think what Ben said was excellent and he expressed many of the points more clearly than I could. Second, I want to comment on the current possibilities from a historical perspective. The massive growth we've seen in AI computing data centers is a bit like an unexpected gold rush. If ChatGPT had never been released, we might not have seen this focus on model training in a particular paradigm.


Now that this has become the paradigm, the big players are locked into this model. It's an easy model for them to enter because the capital input and output are relatively clear. The training methods and secret recipes are relatively simple, and the question is what capitalist incentives can work against this. I think forms of distributed training or efficient training that can leverage long-tail computing resources will emerge because they have huge economic value.


As long as we can prevent the worst-case scenario that Ben mentioned, such as locking up all computing resources through regulation, we can expect open source, consumer-friendly distributed training solutions to emerge. Recently, we have seen some great progress, such as training techniques that do not require matrix multiplication, which significantly reduces the need for data sharing. There are many other papers that are working around the bandwidth limitations of the current paradigm. Although we are not quite there yet, I expect that within a year, we will see three powerful alternatives to the big data center paradigm, which will add some momentum to the open source efforts.


AGI Debate


Michael Rinko: I think you all make very compelling points. For me, the big question right now is, does the law of scale hold? Are you bullish or bearish on the law of scale? Let me explain both cases. If you are bullish on the law of scale, then you accumulate as much computing resources as possible and hope to get more and more powerful models by scaling these models. If you are bearish on the law of scale, then you think that at some point the relationship between compute and loss will break or at least level off. You may think that these models can be simplified, or you may think that the current deep learning paradigm is wrong and AGI (artificial general intelligence) will come from a completely different approach than the current model.


I don't have a very clear view, but it seems to me that there are two very different camps. OpenAI, Anthropic, and Google are strong supporters of the law of scale, and since the GPT-3 paper was published, these companies have been accumulating as much compute as possible, believing that scale is the key to AGI. On the other side, many people in the crypto AI field and top researchers at Meta believe that LLMs (large language models) will not lead to AGI and that we need new approaches. What do you think about this issue? Because if it is indeed scale and computing resources that determine everything, then I don't know how to beat those big companies. So we all seem to be secretly betting that scaling models alone will not solve the problem, because otherwise we enter a resource race that we can't win.
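

For readers who want background on the "law of scale" the speakers keep invoking: the empirical scaling-law literature (the GPT-3-era work Michael mentions, and later refinements such as Chinchilla) fits pretraining loss as a power law in model size and data. A commonly cited form, included here only as context and not as anything the guests derive, is

    L(N, D) \approx E + A / N^{\alpha} + B / D^{\beta}

where N is the parameter count, D the number of training tokens, E an irreducible loss term, and A, B, \alpha, \beta fitted constants, with training compute roughly C \approx 6ND. The "bullish" camp bets that this curve keeps paying off as C grows; the "bearish" camp bets that it flattens, or that falling loss stops translating into useful capability, before the next order of magnitude of spending is justified.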


Ben Fielding: I don't think the scale problem points to a centralized approach. I think there's a different twist here. You think about scale, what does that mean? Can I create a model on a certain number of devices, can I create more devices and create models on them? We're currently in a brute force scaling paradigm, where we throw compute resources at the most centralized, easiest way to model, and we get quick wins and show what's possible. That's like the cutting edge of machine learning, where the world realizes that something like ChatGPT is actually possible. But what happens next? Once you've proven that, there's a wave of improvements. It's no longer a research problem, it's an execution, engineering, performance optimization problem. So there's a huge wave of commoditizing these models, shrinking them, quantizing them, compressing them, sparsifying them, making them faster and easier to run.
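

As a concrete illustration of the commoditization wave Ben describes (shrinking, quantizing, compressing, sparsifying models), here is a minimal PyTorch sketch of post-training dynamic quantization, one of the simplest techniques in that family. The toy two-layer model and layer sizes are illustrative assumptions standing in for a real pretrained network; nothing here is taken from the conversation itself.

    # Minimal sketch of post-training dynamic quantization (illustrative only).
    # A tiny stand-in model is used in place of a real pretrained transformer.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(4096, 4096),
        nn.ReLU(),
        nn.Linear(4096, 4096),
    )

    # Convert Linear weights from fp32 to int8; activations are quantized
    # dynamically at runtime. This cuts weight memory roughly 4x for those
    # layers and often speeds up CPU inference, at some cost in accuracy.
    quantized = torch.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )

    x = torch.randn(1, 4096)
    print(model(x).shape, quantized(x).shape)  # same interface, smaller weights

Techniques like this, along with pruning, distillation, and sparsification, are how a capability first proven with brute-force scale gets repackaged into something far cheaper to run, which is the dynamic Ben is pointing at.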


You need to have actual proof that it's possible, and you need a lot of compute resources to make that possible. But then all the products can be built in a different way, in a smaller way. The brute force approach itself has started to hit a wall.


Michael Rinko: Why is it hitting a wall?


Ben Fielding: Data center space and enough interconnected devices to scale is becoming increasingly difficult. I used to think it was a funding issue, you needed more and more money to do this. I mentioned this to an executive at Meta once, and he laughed and said, basically we have unlimited money, it's not a funding issue, it's a geographic issue. There aren't that many places on the planet to build data centers that can sustain this scale of interconnected devices, which has led the hyperscale data centers to go the route of horizontal expansion.


Now you see a wave of papers saying, hey, actually we can do multi-node, multi-data center modeling. Just think about what this means? Google can do this modeling on three data centers, but what if someone can do it on four, five data centers? Ultimately, the limit of this kind of scaling is to connect all the devices in the world, and that requires cross-company cooperation. So it's impossible for a single company to win by competing with all the other companies. If you can build a layer that allows this kind of connectivity to happen, that will be the ultimate scaling win.


Michael Rinko: I see. I want to make sure I understand this because it's an important question. Are you saying that we can't make data centers bigger than they are now, or that we can't build more data centers in general because of power shortages or whatever?


Ben Fielding: It's not that you can't build more data centers, but there are diminishing returns. It's getting harder and harder to find locations that can support the power and cooling requirements, meet local noise regulations, and house enough equipment. This leads to the hyperscale companies competing for the remaining suitable locations on the planet. At some point, there is a logical end to this.


If you can incentivize people to build these data centers all over the world, rather than just in a single location, then you can connect those devices together into a larger cluster. This has some downsides, such as slower communication between these more widely distributed small clusters, but that is not an insurmountable problem; it is a problem we can solve through engineering. We have a rich history from the Internet of overcoming exactly these kinds of problems, such as latency issues and making distributed systems work.
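

To make the "solvable by engineering" claim concrete, here is a minimal, hedged sketch of the kind of low-communication training loop that widely separated clusters tend to rely on: each worker takes many local optimizer steps and only periodically averages parameters, in the spirit of local-SGD / DiLoCo-style methods. The model, data, and sync interval are illustrative assumptions; this is a generic sketch of the idea, not Gensyn's or any other project's actual protocol.

    # Minimal sketch: local training steps with infrequent parameter averaging.
    # Launch with: torchrun --nproc_per_node=2 local_sync_sketch.py
    # Illustrative only; real cross-datacenter training layers compression,
    # fault tolerance, and verification on top of this basic pattern.
    import torch
    import torch.distributed as dist
    import torch.nn as nn
    import torch.nn.functional as F

    dist.init_process_group("gloo")  # CPU backend that works across machines

    model = nn.Linear(32, 1)
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    SYNC_EVERY = 50  # communicate once per 50 local steps, not every step

    for step in range(500):
        x = torch.randn(16, 32)
        y = x.sum(dim=1, keepdim=True)          # synthetic regression target
        loss = F.mse_loss(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()

        # Periodic sync: average parameters across all workers. Syncing rarely
        # means far less traffic over slow, high-latency inter-cluster links.
        if (step + 1) % SYNC_EVERY == 0:
            for p in model.parameters():
                dist.all_reduce(p.data, op=dist.ReduceOp.SUM)
                p.data /= dist.get_world_size()

    dist.destroy_process_group()

The trade-off Ben alludes to shows up directly in SYNC_EVERY: raising it tolerates slower links between clusters at the cost of staler shared parameters, which is exactly the gap that engineering work on these methods tries to close.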


Travis Good: I think there are two answers to this question, Ben provides one of them, and I have a point that you may disagree with. I think one answer is that large companies will have to move to a more distributed paradigm to achieve scale. It will slow them down because they are in the wrong paradigm right now. The current paradigm is expensive and hard to scale, and they have to make a transition at some point, and that transition is very expensive for them.


Another question is, even if we solve this problem and can scale infinitely, will the model power reach a plateau? This is a question about the relationship between model power and resources. I think this question is about the alignment of models. Take safety-critical applications as an example. The gold standard is formal verification of the system, that is, you have to mathematically prove that the system behaves under certain conditions. This is very difficult in large language models because every interaction with the model creates a functionally different path.


I don't think the scaling itself will stop, but our ability to successfully manage and deploy these models will start to degrade before the model performance degrades. I think we have reached this point.


Michael Rinko: We don't understand why the model outputs certain results now.


Ben Fielding: We have this problem in technology, we are moving into a probabilistic world. Self-driving cars are a good example, we know they are statistically safer than humans driving. But because they are probabilistic, we have social barriers to accepting that they will make safe choices. We need to move from formal logic to probabilistic logic, which is good for humanity overall.


Travis Good: I think that's a very reasonable point, but I would add that, accepting these claims, if AGI could do extremely dangerous things, a mistake could have catastrophic consequences for civilization. The question is: if I want to deploy a system, and because of this effectively infinite API attack surface there is a 30% chance that the system will do something catastrophically bad for civilization, can I deploy it under your model? My answer is absolutely not, and then the whole development curve stalls for a while while everyone does basic research to figure out how to achieve superalignment. That is my mental model.


Ben Fielding: It requires the assumption that some huge negative outcome could happen. There are no actual examples of what the model is capable of doing now. The examples are always what if the model could do something in the future. It's always a possibility, which is the characteristic of probabilistic models. We need to accept the fact that there could be very high-impact, but very low-probability negative outcomes, and as humans, we are inherently dealing with probabilities when we interact with the world. We need to move from more imperative techniques to probabilistic techniques.


Pondering Durian: I feel like this is going to get into AGI and hyper-alignment really quickly. So, back to the closed vs. open question.


Tommy: Thanks for getting that back on track.


Pondering Durian: My point is that there is a reflexive cycle going on right now between the capital markets and the hyperscale companies. They have access to a lot of cheap money; obviously they have $400 billion balance sheets, generate cash flow equal to roughly 1.5% of GDP, and are expected to spend $1 trillion on CapEx. So they do have a lot of firepower to pursue what Michael was describing as the law of scale. The big question is whether there is enough revenue to justify the continued investment: can they capture maximum value from each generation of the model, and can they get the next $100 billion in revenue from enterprises and startups that use and rely on their models? If they can't, people will stop funding them, and Google, Amazon, and Facebook will all get hit hard and won't be able to build those $100 billion clusters.


I think the financial constraints are the key issue here. So the question is whether the capital expenditures for these centralized models can pay off. That gets into what the ultimate moat is for the large base models, how quickly these distributed systems and distributed training runs can catch up, and how much value capture there is at the very high end versus the parts that get commoditized. I'd love to hear Ben and Travis's take on this, because to me the question is how far the returns on these large upfront investments can go, which gets back into the law of scale.


Economic Value Capture and Distribution


Travis Good: I can answer that really quickly. I think it's a snowball model. I've expressed my view to most of you that I think AI is an economic replacement technology. Basically, the more powerful the AI models are, the more economic jobs they replace, the more valuable they are. Going back to PD's point, the question is whether there will be a moment where people think this snowball is not going to keep rolling and people get timid and don't want to invest more capital expenditures because they think they've reached a point of diminishing returns. I actually think that moment might be when we have real alignment problems because we don't actually have a good theory of machine learning to solve these problems.


And then I want to comment on the open and closed source thing. I actually think the economic value of AI would be higher in a world where security research is public. Because that reduces the risk that anyone is unaware of the status of security research and that countries and institutions don't have the right protections in place. It's in everyone's interest to have that information as widely disseminated as possible.


Michael Rinko: So can we compare what you just said to the nuclear bomb race? Would you have wished that the basic research at the time was open source and widely disseminated? There was a view, even among some top scientists, that if everyone knew the latest state of research, then it would be like a mutually assured destruction situation, where we would all have the same stuff and would all point our guns at each other at the same time, and that would lead to the safest world. And the other side thought that we need to get there first, we need to get to the end first and then point the big guns at each other so that no matter what the other side does, we can win. I actually think that's a good analogy, it may not be perfect, but it's a good comparison in the current AGI race.


Travis Good: I'm glad you asked that question because I actually think that's an absolutely wrong analogy. Let me explain. First of all, we need to look at what actually happened historically. We thought we had a route to military superiority and dominance, but the reality was that these technologies leaked almost immediately. We may have this desire to completely change the balance of power, but reality re-establishes the balance of power. So I would ask people who pursue this strategy if they really think that's a realistic goal.


Second, the nature of machine learning research is so fundamentally different from physics research that this analogy is completely untenable. Specifically, I think there are two paradigms to look at. One is the institutional paradigm, where you have a small group of qualified scientists who can be controlled by the state. The other is the networked paradigm, where you have an extremely widely distributed knowledge base, the skill threshold required to enter the field is relatively low, and a lot of people around the world can participate as long as you give them computing resources. So from a network analysis perspective, the idea of trying to impose an institutional model on machine learning is completely ridiculous. If you follow that kind of policy, you are effectively locking down all machine learning research in the United States for three years, which is a huge loss of economic benefit to the public, and then four years later, or even sooner, the technology gets stolen by a foreign government anyway. So you'd have a lot of security measures, but they won't bring any benefits.


Ben Fielding: Just looking at the example of nuclear weapons, we ended up in a state of mutually assured destruction. You assume that an open approach would have been disadvantageous, but in fact, if it had been open, we might have reached mutually assured destruction without the weapons ever being used. Does there need to be a proof point to demonstrate that advantage? Or can we skip the intermediate stages, the ones that are very harmful to the world, and go straight to a state of mutually assured destruction?


Michael Rinko: I think you all make good points. I agree that the advent of nuclear weapons directly contributed to long periods of peace. We have avoided major wars since World War II in large part because you can now destroy entire countries. I think it is a useful deterrent and now multiple parties have it. You could say that if everyone had AGI, then that would lead to the safest world. But I don't think that is a lesson. If we go back to nuclear weapons, if you extend that logic, do you want every country to have nuclear weapons? Do you want Hamas to have nuclear weapons now? Do you want the Taliban in Afghanistan to have nuclear weapons?


There are many dangerous moments in history, like the officer on a Soviet submarine who thought World War III had already started and had the power to decide whether to launch a nuclear weapon at the United States. He decided not to. It was just one person, but if you extend that capability to many people, the odds of someone pushing the button increase. So, I think one of the redeeming aspects of nuclear weapons is that they are really hard to make. I agree with you, Travis, these models are not hard to make. So I think the analogy is flawed in some ways, but overall you still want to limit these super powerful, potentially destructive technologies to only a few people.


Travis Good: I agree with you.


Pondering Durian: Or you just need to make sure you stay ahead of the curve, right? Whether it's military capabilities or cyber capabilities, there will always be potential attack vectors, but if you can stay one step ahead, then you can counter those threats. Like when Hamas fired those rockets, Israel had the Iron Dome defense system. The threat model is constantly evolving, but defenses are also constantly improving. Additionally, I think there is a correlation between how wealthy the world is and willingness to go to war. Wealthy people are generally less interested in war or destruction. There have never been wars between wealthy democracies. There are ideological reasons for this as well as wealth reasons. If the benefits of AI can effectively permeate the global economy, and those benefits don't just accrue to Microsoft shareholders, then we could actually have a very good outcome, but it's a very difficult balance to strike. I agree that there are many threats and that things could go completely wrong, but I also think that overall it's optimal to let this technology spread with good governance.


Ben Fielding: It sounds like we all agree on that, because in this view, the best thing for a country to do is to invest heavily in this area and stay ahead of other countries. Countries need to stay ahead by funding technology and efforts, not by pulling back. This is usually how defense research is done, investing a lot of money to explore the next frontier technology, and it is usually beneficial.


Michael Rinko: How does this work now that the commercial sector is so far ahead of the government?


Travis Good: There are two things here that I think we need to sort out. For me, this is also why I always get a little excited when I talk about AGI, because I think AGI is primarily an economic paradigm, it is a labor replacement technology. It has some military applications.


Michael Rinko: But the military is the largest "company" in the world. They employ more people than any company, so there is obviously a dual use.


Travis Good: I agree that dual use exists, but any time you think about shutting down a technology like this, you have to think about the hard power and soft power aspects of it. Let me give you a scenario to get you thinking about the implications. If we completely locked down AGI and only let the military use it, and denied everyone economic benefits, we would actually lose economic competitiveness. Our military strength depends on our economic capabilities. If we were in debt or were far behind other countries in exploiting AGI technology, we would actually lose any war. Ultimately, the country that is both researching military applications and deploying them in the broader economy will win.


Michael Rinko: I agree with that conclusion. I may have a different view of AI or AGI. To me, what we are building is intelligence, and intelligence is an input to everything. If you had unlimited intelligence, it would be like electricity: you don't know in advance what you would do with it, it's just a force in the world. It's crazy to think that AI wouldn't have dual use, and governments are certainly interested in this and are already paying attention. Going back to PD's original question about whether the revenue from these models can catch up with the expenditure, I think it is more important to focus on whether capability will reach a plateau. As long as the cutting-edge labs, such as OpenAI, can launch more advanced models every year, and the hype continues, the revenue will catch up. We have seen reports that OpenAI now has annual revenue of $2-3 billion, which is unprecedented growth. But ultimately, the government will step in and directly fund these labs, because it will become a national security issue. The government will not focus on revenue but on the capability of the model, from a national security angle.


Pondering Durian: But that's a different argument. We're talking about market forces. If you think that the government is going to get involved, then the whole dynamic changes, and the dynamics of the capital markets are actually thrown aside. You can tax any user and use it for military applications. But my view is that this is not about capability, it's about capability relative to cheaper alternatives. If I were an investor, I would not continue to pour $100 billion, $1 trillion into OpenAI and Microsoft to pursue the next generation of models. If they weren't capturing any value, then investors wouldn't do that. So my view is that the question of capability is not absolute, it's relative. It's a question of capability relative to everybody else's capability, and the cost of achieving that capability.


Ben Fielding: I think the question of moat is an important question for a lot of the big AI companies. We watched OpenAI try to figure out their moat, and they seem to be still figuring it out. Is it this ecosystem? Is it ChatGPT? Probably not. Is it their ability to build an ecosystem? Probably yes. I think it's a mechanism to drive openness from a strategic perspective for profitable companies, and that's what Meta is doing.


Meta recognizes that their moat is their ability to distribute these models to users. They're able to apply these models to actual real-world situations, not just trying to make money from the models themselves. So they drive open source models, not out of altruism, but for actual strategic gain. Meta complements their product by commoditizing the models, even if they're the only one with the ability to distribute them on a social network. This drives openness in the technology itself, but also allows Meta to continue to capture other resources.



The trend we're seeing is that what really matters is the actual ability to distribute. OpenAI will try to provide software that leverages intelligence for distribution, but pretty soon, all the companies that have the ability to distribute will want to break away from OpenAI. If there's no real moat in the technology itself, they'll be driven to do that. The more money they make, the more they'll want to bring that in-house. And there doesn't seem to be much that's stopping that from happening, other than capturing the underlying resources, like compute resources or from a regulatory perspective.


Pondering Durian: Ben, I think one of the things you said earlier that I agree with and is correct is that this is a distribution game. The unfortunate reality is that these five or six companies have already won on distribution. We're actually thinking about how to provide an infrastructure layer. But if you think that distribution is the ultimate moat, then are you actually agreeing with me?


Travis Good: I have an answer, if you don't mind, Ben. I think we have to bring in a model of social friction. You have to think about what happens when OpenAI takes action. Take a medical coder, for example, they listen to a doctor's notes, look at the documents, and then figure out the appropriate insurance code. It's a tedious job, but if you're efficient, you can make a lot of money. Let's say OpenAI comes in and suddenly buys out the entire medical coding field, which is possible, and they have great models, models with infinite attention might be better at this job than a person. So what happens to society when that entire profession disappears? People are going to be very hostile to these companies, and it's going to hurt their distribution. People are going to look for alternatives because everyone is thinking about their own moat, and they're going to start calculating, if I give all my data and my workflow to OpenAI, tomorrow my business model might become their business model.


Pondering Durian: But Travis, everyone still uses Amazon, everyone still uses Facebook, everyone still uses Google. That's the problem. I agree with you, but consumers don't care because Amazon gives them the cheapest product.


Ben Fielding: Are you saying that the innovator's dilemma as a concept is dead? Because that would mean these companies will always stay ahead.


Pondering Durian: No, it's just not clear to me how the system changes. Right now it seems like AI is just making the products and distribution of existing companies better, Meta can enhance their products, Google can enhance their products.


Travis Good: Let me approach this from a different angle. Imagine that today's companies are very inefficient because they are configured to organize humans. The companies of the future will be configured to organize agents: small human teams plus large numbers of agents. So there's an argument that the big players are actually obsolete. If you give AI technology to smaller, more nimble companies that can organize themselves correctly, they will win.


Pondering Durian: I totally agree, it's a very interesting paradigm. But is it the case that the more nimble players ultimately win? Or is it that the large companies, with their existing cash flows and distribution capabilities, will drastically reduce their workforces? It could be the innovator's dilemma, where scaling down or cutting headcount just isn't feasible, so new players emerge that bring a combination of creativity, agents, and humans. But in reality, I think you run into some social constraints, like how value is captured and distributed.


Michael Rinko: I think we're having a really interesting discussion right now. It's important to think about who can keep driving this flywheel, who can keep raising the money, putting in the compute, and paying the top researchers to keep it spinning. I have some thoughts I'd like to hear your reactions to. Ben, I think the Meta example is really interesting, and it's actually a great example of how much revenue matters. Meta invests enormous sums to train these models and then gives them away for free, not charging you a penny. I think at some point, if you use the API beyond a certain amount, they'll charge you, but it was completely free in the beginning. They're not making any money from it right now.


Pondering Durian: But Michael, this isn't charity, they're doing this because they don't want to be dependent on Google and Apple. Mark got burned by ATT once, and he doesn't want to go through that again.


Michael Rinko: I understand the bet, but it’s a big one because they haven’t made any money from it yet.


Ben Fielding: The models are applied to Meta's own products via APIs.


Potential Disruption of Decentralized AI and Crypto


Pondering Durian: Yeah, but how do they make money from it? Michael, you’re actually backing up my point that models are being commoditized. If Meta is investing tens of billions of dollars, what does that mean for OpenAI’s business model?


Michael Rinko: That's exactly my point; I think they're playing different games. Meta is giving away models for free, while OpenAI is building leading-edge closed models. OpenAI can go to a drug discovery lab and say, we'll run inference for you for a month to synthesize a certain drug, and we have the best model in the world, with the best inference, X, Y, and Z features, the best agent, whatever the fad is. That drug discovery company will pay OpenAI whatever it asks. It's a completely different business model than selling ads on social media platforms to cover costs.


Pondering Durian: I agree, but it depends on whether OpenAI can continue to stay ahead of Meta and all the other players. If Meta also invests $20 billion in next-generation R&D, hires a lot of researchers away from OpenAI, and then gives those models away for free, then whatever OpenAI wants to charge for gets undercut. So, I'm not saying OpenAI can't be a great business, I'm just saying they do face commoditization risk, and it depends on whether they can keep staying ahead.


Michael Rinko: They trained GPT-4 two years ago, and it's still one of the most powerful models in the world. So in my opinion, they have a two-year lead.


Travis Good: I disagree with that a little bit. There have been many releases of GPT-4, and in fact the current model has surpassed the originally released GPT-4. That's the problem with closed models. These companies claim it's the same model, but in fact it isn't, and it can change without notice. So if you rely on these models staying the same, you're in trouble.


Michael Rinko: My understanding is that GPT-4 was pre-trained two years ago, basically reading everything on the internet. Since then, they've been doing iterative post-training, improving performance. But to me, it's amazing that the base model was trained two years ago and is still competitive with today's state-of-the-art models, which shows that they are really ahead.


Pondering Durian: I agree with that. My point is that there are no natural moats other than access to talent and compute resources. In the Web 2.0 era, you could benefit from network effects; even with poor execution, like Twitter, you would still keep acquiring users and carry strong momentum. But the world you're describing would require OpenAI to keep investing in a super-aggressive way to maintain its lead, in both talent and compute, which is why they had to go do deals with Microsoft and Apple. So, I think they can stay ahead, but it's going to be a lot harder than it was in the Web 2.0 era, because there were some natural moats back then.


Ben Fielding: I think it comes down to the specific use case. We talk about distribution being a moat, but distribution is not just about access to users, it's about the specific use case. We talked about OpenAI possibly applying GPT-4 to drug discovery. But people have actually been applying machine learning to drug discovery for years. The idea that GPT-4 will outpace the companies designing models specifically for drug discovery is unrealistic. GPT-4 is a great marketing tool, which means OpenAI might win some contracts. But Isomorphic Labs, for example, Demis's effort aimed squarely at drug discovery, is far more likely to win that use case than a company like OpenAI that seems to want to cover every potential use case; the generalists will be outperformed by companies targeting those markets specifically.


Michael Rinko: I think that's a really interesting question. How do you think about the future of AI? Will the most general form of intelligence win? Or, as Ben said, will smaller, more specialized models focused on specific use cases win in those areas, with general models merely average there while the specialized models excel?


The Unique Advantages of Crypto in AI


Ben Fielding: I have a pretty skeptical view. At the end of the day, this is just a technology that can be applied in a specific area, like a data access tool. You can think of the model as an extremely efficient way to compress information, which works very well in many areas, and can exhibit logical functions in things like language models and general intelligence.


The general intelligence part is really a routing mechanism underneath it, in my opinion. It's just a better way to interact with the data in the world. I think you can apply these technologies in a lot of different ways, and you don't have to have a general intelligent routing mechanism to achieve every use case. Like past techniques, they can be applied in many different ways across many different areas.


But the fixation we have right now is that people think you have to have a brain-like base model to achieve good results. I don't think so. If you look at the history of machine learning, you'll see it applied in different areas and in different ways. I think our current focus on Transformer models is just temporary, and once we find that they don't magically solve all problems, we will return to applying a diverse range of techniques.


Travis Good: I agree with a lot of what you said, and I also think it's all about tool use. I think a very good base model, combined with excellent tool-use skills, will win everything. The reason is that it can apply all the specialized models in service of a larger goal. So I think it will absorb them in that way, but this doesn't contradict any of the branching points. I agree that specialized models may have advantages in many use cases, because you can't train or fine-tune a general model for certain specific cases. For example, GPT-5 will not be AlphaFold.



Tommy: Hey guys, I have a question. We're talking about the OpenAI vs. Meta competition right now, but we haven't made it clear how crypto AI can win here or could win. I want to spend 10 to 15 minutes discussing this. We talked about power and real estate issues, like not having enough space in data centers or not having enough power, or as Pondering Durian said, the mismatch between funding and revenue and expected output, which could be a fatal blow to centralized AI. I hope we can discuss how crypto AI could win, and if you think it can't win, that's fine, but I want to spend some time on this. PD, we can start with you and then go in order.


Pondering Durian: Sorry, I just disconnected. Going back to our previous discussion, I think disruption often comes from the edge. Decentralized AI provides some capabilities that closed source AI will never be able to replicate. Closed-source performance may be strong today because of a vertically integrated approach to scaling, proprietary data, and a lot of compute, funded by monopoly cash flows from the first generation of the Internet. I think crypto provides a more open, transparent layer that others can build on top of. If you value transparency, composability, and an ecosystem of applications where you can verify the underlying models and datasets, then as decentralized compute and inference performance improves, you will see more and more people want to join this ecosystem. So I think this is a classic disruption framework: there's a solution improving fast along its own trajectory that also has unique advantages edge customers are using today, and its performance will improve over time. Once performance reaches comparable levels, it offers a superior solution.


That's my quick summary, I'd love to hear from Ben and Travis.


Ben Fielding: Yeah, I want to answer that question directly, and it's going to be a very biased answer, given that what we do at Gensyn is very closely tied to decentralized technology.


But in general, I think the biggest power of decentralization and decentralized technology is that it enables value flows that are very hard to disintermediate, rent-seek on, or capture. We talk a lot about the resources that underpin machine learning, and we think of them as a few distinct pillars. Each of these has different dynamics: whether it's already open, or whether it's liable to be captured by a big company that then blocks access for everyone else. Crypto allows us to create markets on these resources that remain credibly open.


Obviously, we think a lot about compute resources, which is the space that we build in. But if you think about the future of markets, if we can create a way for demand from the actual users of a resource to flow directly to the people who own and provide that resource, then we can create a more competitive, more liquid market on that resource, one that could ultimately come to proxy energy markets around the world, just as Bitcoin did. You can do the same thing with compute as the underlying resource for AI. If I have a particularly cheap source of electricity, say green electricity that's cheap because there's no way to use it effectively locally, and I can buy a GPU, plug it in, and immediately reach the demand side of GPU compute, I'll do it, because I'll earn revenue for it. Today I can't do that, because the only way to do it is to become a cloud service provider, and the cloud service providers don't exactly encourage me to become one of them. That cycle continues, and they end up buying up all the compute, rent-seeking on it, and setting oligopolistic prices. Crypto lets us keep that market liquid and stops it being captured by centralized providers. I think you can do this with all the underlying resources for ML. Ultimately, you'll see more competition at the actual application stage, and I think we can all agree that's probably where the moat is: providing value to the user, not holding a particular resource.
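

To make that mechanism concrete, here is a minimal Python sketch of the kind of open compute market Ben describes: GPU owners post offers at whatever price their electricity allows, and demand clears against the cheapest supply first. This is purely illustrative and is not Gensyn's actual protocol; the class and field names are invented, and it ignores verification, settlement, and everything else a real network has to solve.

# Toy sketch of an open compute market: providers compete on price,
# demand clears against the cheapest available supply first.
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Offer:
    price_per_gpu_hour: float                      # the only field used for ordering
    provider: str = field(compare=False)
    gpu_hours: float = field(compare=False)

class ComputeMarket:
    def __init__(self):
        self._offers: list[Offer] = []             # min-heap keyed on price

    def post_offer(self, provider: str, gpu_hours: float, price: float) -> None:
        """A GPU owner (e.g. someone with cheap electricity) lists capacity."""
        heapq.heappush(self._offers, Offer(price, provider, gpu_hours))

    def fill_job(self, gpu_hours_needed: float) -> list[tuple[str, float, float]]:
        """Match a training job to the cheapest offers first."""
        fills = []
        while gpu_hours_needed > 0 and self._offers:
            best = heapq.heappop(self._offers)
            take = min(best.gpu_hours, gpu_hours_needed)
            fills.append((best.provider, take, best.price_per_gpu_hour))
            gpu_hours_needed -= take
            if best.gpu_hours > take:              # return unused capacity to the book
                heapq.heappush(self._offers,
                               Offer(best.price_per_gpu_hour, best.provider,
                                     best.gpu_hours - take))
        return fills

market = ComputeMarket()
market.post_offer("hydro_farm", gpu_hours=100, price=1.20)
market.post_offer("cloud_reseller", gpu_hours=500, price=2.50)
print(market.fill_job(150))                        # the cheapest supply clears first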


Tommy: Ben, quick question before Travis and Mike answer. I want to confirm your point, and correct me if I've understood it wrong. Your point is that the money pouring into hyperscale data centers, relative to the revenue and results of the products built on them, isn't that sustainable, and we're not getting the outcomes we want. What if we could instead build a meta-network connecting all the GPUs so that, ultimately, like electricity, anyone can access it, creating the maximum surface area for the most valuable AI applications or agents, rather than one centralized company picking the pilot projects and competing on full AGI? Is that a fair summary?


Ben Fielding: To some extent, yes. We've already discussed whether scale is the factor that determines the winner, and that's genuinely still up for debate. But ultimately you can think of it this way: what is the least extractive way for humans to use these resources? Turning electricity into knowledge through GPUs, without a stack of middlemen each taking a cut along the way, is the least extractive way we can do it. I think crypto and decentralization give us the rails to build a system of minimal extraction, where the only people extracting value in the process are the ones actually providing value.


For example, someone creates an interface to a model. If that interface competes with a lot of other interfaces, then that person can make a profit, but their profit is a fair market price reached through competition. This happens at multiple stages in the chain, all the way down to the stage where you source electricity. Right now, because compute is scarce, it isn't a commodity but a limited resource captured by providers who are able to extract huge value from it. Crypto gives us rails on which that doesn't happen. I think that's a net benefit to the world no matter where AI goes, whether or not scaling continues to be the winning approach, because open, general use of resources is still the most efficient path. A protocol without a profit motive just needs to sustain itself, and if it can sustain itself and attract the largest demand-side usage, it should outperform anyone who has to profit from it.


Travis Good: I'm a big fan of Nassim Taleb, and I think centralized structures are fragile. Their fragility is that they require trust. If I'm a medium-sized business and I use ChatGPT for customer service, OpenAI could change the behavior of the model tomorrow, which would change the behavior of my entire customer service department. That's a huge amount of control that you have to give up as a user of the application. Not to mention that if I offend OpenAI, they can shut down my entire customer service department. I have to have a lot of trust in this entity to not get hacked, not act maliciously, not act arbitrarily. We've seen Google shut down people's Gmail accounts for no reason, and people lose years of accumulated memories and lives because of it. So I think decentralized structures provide robustness and antifragility because they can be trustless. If I can get a really good model through the Ambient Network that's publicly trained, has known properties, and behaves predictably, I can trust that model to continue to run on the network and not be interrupted by arbitrary actions of centralized structures. As long as there is an economic incentive, the model will continue to run. This provides me with better security, which is a huge advantage of crypto AI. It allows me to get interesting capabilities without worrying about the implementation and structure of a specific organization. This composability is critical to the development of the future economy.

Michael Rinko: You all bring up really interesting points. My view on crypto AI is that when I was writing the report, I was trying to think about what problems crypto could solve for AI that couldn't be solved elsewhere. I think you all touched on some key points. For me, there are three. Number one, crypto is trustless. If you were an AI agent, would you trust your capital to JPMorgan, or would you trust a digital wallet that you control yourself, a public-private key pair on a blockchain? You don't have to trust anyone in crypto, and I think that's a very attractive feature for AI. For humans, it can be a scary feature, because there's no one to call for help. But for AI, it's going to be very attractive. Number two, crypto is deterministic. When you execute a piece of code in crypto, you know exactly what that piece of code is going to do; there's no ambiguity. Whereas in the real world, when I call my bank to wire money, I don't know if the money is going to arrive today, tomorrow, or next month. We've all been there. The real world is random and full of uncertainty. My bet is that AI is not going to like that uncertainty; it's going to prefer deterministic execution, and crypto offers that. Number three, AI can enable hyper-capitalism through crypto. A silly but probably the best example is meme coins. Crypto can financialize anything, and it does. I think that's a unique property that AI can leverage to accumulate resources. So, for these reasons, I'm very bullish on the combination of crypto and AI, and I think we're just beginning to see the value in it.
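

As a toy illustration of the first two properties Michael lists, the sketch below has an agent hold its own key material while a ledger applies transfers as pure, deterministic state transitions. The HMAC stands in for a real digital signature and the dictionary stands in for a blockchain; every name here is hypothetical.

# Toy sketch: the agent alone holds its key, and the "chain" applies a transfer
# as a deterministic state transition. HMAC is a stand-in for a real signature.
import hmac, hashlib, secrets

class AgentWallet:
    def __init__(self):
        self._secret = secrets.token_bytes(32)            # only the agent holds this
        self.address = hashlib.sha256(self._secret).hexdigest()[:16]

    def sign(self, message: bytes) -> str:
        return hmac.new(self._secret, message, hashlib.sha256).hexdigest()

class ToyLedger:
    def __init__(self):
        self.balances: dict[str, int] = {}
        self._keys: dict[str, AgentWallet] = {}            # stand-in for signature verification

    def register(self, wallet: AgentWallet, balance: int) -> None:
        self.balances[wallet.address] = balance
        self._keys[wallet.address] = wallet

    def transfer(self, sender: str, recipient: str, amount: int, sig: str) -> None:
        """Deterministic state transition: same inputs always give the same result."""
        msg = f"{sender}->{recipient}:{amount}".encode()
        expected = self._keys[sender].sign(msg)
        if not hmac.compare_digest(sig, expected):
            raise ValueError("invalid signature")
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount

agent, vendor = AgentWallet(), AgentWallet()
ledger = ToyLedger()
ledger.register(agent, 100)
ledger.register(vendor, 0)
sig = agent.sign(f"{agent.address}->{vendor.address}:40".encode())
ledger.transfer(agent.address, vendor.address, 40, sig)
print(ledger.balances)   # the outcome is fully determined by the code, no bank to call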


Pondering Durian: These are all great points. I would also add, as I mentioned in the article, that existing institutions are largely a product of the industrial age, so there's a lot of friction limiting protocol adoption. If more of the economy moves to agents or agent-based networks, then a lot of the infrastructure we're building will make more sense. Today there's a lot of friction in how businesses interact with protocols, especially those with large legal departments, who are always more conservative and prefer AWS service level agreements or contracts, because that's how the world works for them. But if we outsource more decisions and intelligence in the future, I think those decision makers will start using more composable, trustless infrastructure, as mentioned earlier. So if we start to see this shift over the next three to five years, it's going to be really good for Web3 infrastructure and applications, because it really requires both sides of the network to evolve, and we're only just starting to see the demand side mature in a way that fits the Web3 infrastructure built over the last five or six years.

Ben Fielding: I think there's a really interesting analogy here: Web3 and AI are both basically replacing human costs with code costs. Web3 came along and replaced legal contracts, human trust mechanisms, and so on with smart contract execution. That's an example of doing with cheap code what used to require human labor. The same is true of AI, as you said: it replaces expensive human work with more sophisticated code. So the combination of the two replaces expensive human costs with code to the maximum extent possible. Then you ask, what's left at the end? It's the trust part. In the real world, people genuinely value human trust and are willing to pay more for it. That's why enterprises and legacy institutions buy IBM. Ultimately, you find you're buying AWS because you're willing to pay a premium for the human part. But now that premium has been clearly carved out and financialized, so you know exactly what you're paying for; it's no longer a fuzzy purchase like in the traditional cloud world, but a very explicit one. I think we can take this a step further into the future of AI, where it becomes an explicit financialization of the human experience: our values shift, and we're willing to pay more for explicitly human experiences, even irrationally choosing things that aren't best for us, simply because we value them. Like, I want to buy this handmade thing because a person made it by hand. I want to buy this cloud provider's product because there's a person there. Even though the crypto product with automated settlement and AI running it is actually better for me, cheaper, more efficient, and gives me better results, I'll choose the other thing, because I'm human and that's what I want to buy.


Michael Rinko: Ben, can I ask you a question? I think your insights are really interesting. A common pitch in the current AI space is that decentralized compute will be cheaper, and that's the selling point. But the counter-argument is that, as you just said, cost is not the primary consideration for these companies, especially when you're talking about the most valuable companies in the world; they will spend extra money to avoid the hassle, and if there's an existing relationship, they'll use it. Do you think there's a tipping point where these companies can save enough money that cost actually becomes the primary driver? Or is it a reputation issue for these crypto AI companies, where it just takes a few years for GPU providers to build up reputation and relationships with these companies? How do you think this plays out? Is there a clear tipping point where the competitive dynamics change between centralized and decentralized offerings?


Ben Fielding: It’s really the classic friction vs. benefits tradeoff with any new technology. You need to ask yourself if you’re willing to go through the friction of adopting a new technology for the potential benefits, like cost savings, increased scale, or increased access. We have to go through this transition: some people have to be willing to go through the additional friction to get those potential benefits. Once you get past that initial friction, you can reduce that friction without having huge switching costs.


But no one has really solved this tradeoff yet. We see a lot of speculative projects around crypto AI, a lot of people saying there will be huge benefits in the future, and a lot of people betting on that future. But clearly no one has really captured real-world demand for value; there aren't many real use cases you can point to and say, this is bringing real value to Web2. And I think that's what needs to happen. Frankly, I think we need to focus on building these technologies and making them better in specific, real use cases, rather than focusing on incentives and bootstrapping, which in our opinion is just a distraction.


Of course, at some point you need some incentives. If you're building a marketplace, you need to do some bootstrapping, but it's not the main work. The main work is creating a product that is actually valuable. I think we've gone through a period of distraction, but there are a lot of people behind the scenes building these products. Once they launch, we're going to see a surge, but it has to cross the barrier of friction and cost.


Pondering Durian: But I think you do need network liquidity. Crypto is really about networks, so you need the demand side and the supply side, and they need to be there at the same time. It's always very difficult to bootstrap a market at the beginning in order to create that experience; the supply side won't accelerate until it sees the demand side. We still have a very high-friction onboarding experience for users. So I agree that focusing on product is very important, but a lot of the benefits will only be realized at critical mass, not 5 billion, but at least enough crypto wallets that the supply side can meet the demand side. Right now the number of users is still very limited, so I do think network liquidity is very important.


Travis Good: I just wanted to chime in, because I think this is a very important point. PD, if we combine crypto with AI, we're actually bringing decision support into programmable economics, and I think crypto has been lacking decision support until now. Large language models can ingest a lot of information and make relatively simple choices based on all those inputs, which was unthinkable before. If I tried to write a contract on Ethereum to parse a news story and decide whether the sentiment is positive or negative, I would have to do a lot of hacking, never mind trying to produce a consistent, reasonable summary I could base an economic decision on. So we're actually introducing a level of capability and reasoning that makes the underlying economic APIs far more compelling than before.
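

A minimal sketch of the pattern Travis describes, under the assumption that a placeholder function stands in for the actual LLM call: the model supplies the fuzzy judgment (positive or negative sentiment) and a hypothetical place_order hook stands in for the deterministic on-chain action. None of this is a real exchange or contract API.

# Fuzzy judgment from a (stubbed) language model, deterministic action behind a
# programmable hook. Both functions are placeholders for illustration only.
def classify_sentiment(article: str) -> str:
    """Stand-in for an LLM call; a real agent would prompt a model here."""
    negative_cues = ("recall", "lawsuit", "halt", "breach")
    return "negative" if any(cue in article.lower() for cue in negative_cues) else "positive"

def place_order(side: str, size: float) -> dict:
    """Hypothetical on-chain order submission; returns the intent it would sign."""
    return {"side": side, "size": size, "status": "would_submit_on_chain"}

def act_on_news(article: str, size: float = 10.0) -> dict:
    sentiment = classify_sentiment(article)            # fuzzy reasoning step
    side = "buy" if sentiment == "positive" else "sell"
    return place_order(side, size)                     # deterministic execution step

print(act_on_news("Chipmaker announces record data-center revenue"))
print(act_on_news("Regulator orders a halt to the product rollout"))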


Ben Fielding: On the market bootstrapping issue we were talking about, it's true that at some stage you're going to need some bootstrapping work. I'm not saying you don't need to balance supply and demand, but your use case greatly influences how much bootstrapping you need to do. For example, if you look at Helium, the network coverage they provide is subject to Metcalfe's law in the telecommunications world, which means the network is only valuable at scale, so you have to bootstrap the supply side very heavily to provide any value to users.


But some use cases aren't constrained by that. Take access to GPU compute: if I have a GPU and there's a user who needs to train a model on it, we can transact and real-world value flows through the network. Yes, it's not at scale yet, but you can scale up gradually without a massive bootstrapping process. I think we're in a weird situation right now where competition among GPU networks is all about the number of GPUs, but no one is actually using those GPUs; where is the demand-side usage?


So we've fallen into the trap of collecting as many GPUs as possible, which is the wrong metric. The right metric is actual usage: getting users to send tasks to the devices and get results back. If that works at a small scale, I think it's actually better than unilaterally bootstrapping a huge amount of supply that can't be matched to demand. Scaling gradually only works when the supply is a commodity whose value grows linearly; fortunately, ML compute happens to be one of those cases. If you're in a network bandwidth model, you have to find a way to bootstrap the supply side, that's the game you have to play, and you have to make sure the demand side can take over within a short window, otherwise you won't win.


Michael Rinko: Where do you think the most likely demand for decentralized compute networks is coming from in the near term?


Ben Fielding: I think demand is likely to come from a few different areas. One is seed to Series A startups that currently can't get access through cloud providers. Another is individual users who don't have local compute but want to fine-tune models, for example open source models they have access to but no compute to train on. A little further down the road (though probably not too far off) is collaborative training of models. A community can come up with an idea for a new type of model but not have the compute to build it; now they have access to that compute. They can also use the various tools Web3 provides, like pooled funds and audits, to collectively activate those resources without having to trust a single party. That way we can have more collaborative model development on compute networks, data networks, and, to some extent, expert-knowledge incentive networks. These are networks where people can propose models and get rewarded later for proposing them, because there's a record on the blockchain that they created and published the idea. That incentivizes collaborative work as we gradually attribute value back to the source of the resource or idea, and that's when the flywheel effect really kicks in.


Until then, we can serve the simple usage that currently goes to cloud providers, which are often either overpriced or simply inaccessible.
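

As a rough sketch of the attribution idea Ben describes above, the bookkeeping might look like the following: contributions are recorded against a model identifier, and later rewards are split in proportion to those records. A real network would anchor the records on-chain and verify the underlying work; the class and names here are invented for illustration.

# Toy attribution registry: record who proposed or contributed to a model,
# then split any later reward in proportion to the recorded weights.
import hashlib
from collections import defaultdict

class AttributionRegistry:
    def __init__(self):
        self.contributions: dict[str, dict[str, float]] = defaultdict(dict)

    def propose_model(self, spec: str, proposer: str, weight: float = 1.0) -> str:
        model_id = hashlib.sha256(spec.encode()).hexdigest()[:12]
        self.contributions[model_id][proposer] = weight    # durable record of authorship
        return model_id

    def record(self, model_id: str, contributor: str, weight: float) -> None:
        self.contributions[model_id][contributor] = (
            self.contributions[model_id].get(contributor, 0.0) + weight)

    def split_reward(self, model_id: str, reward: float) -> dict[str, float]:
        """Pay out proportionally to recorded contributions."""
        total = sum(self.contributions[model_id].values())
        return {who: reward * w / total
                for who, w in self.contributions[model_id].items()}

registry = AttributionRegistry()
mid = registry.propose_model("community protein-folding model v0", "proposer_dao", 1.0)
registry.record(mid, "gpu_provider_a", 3.0)
registry.record(mid, "dataset_curator", 1.0)
print(registry.split_reward(mid, 500.0))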


Travis Good: I think as blockchain comes to be seen as a competitive capability, you're going to see a massive increase in utilization of these networks.


Ben Fielding: I totally agree. When models start needing access to resources to trigger themselves and other models, things get exponentially faster because you eliminate the same "human world costs" that we've talked about in other areas. At this point, you need highly liquid access to these resources, otherwise you're always going to be limited by the efficiency of the resource market. So if we want to progress as fast as we can, we have to make these markets as efficient as possible. I totally agree that once models actually start accessing resources, the whole situation will fundamentally change, and that's something that I'm really excited about. I think that's going to be a really big moment.


Michael Rinko: Travis, I'm glad you mentioned that, and it's something I hope we'll talk about at some point. I don't want to sound too sci-fi, but I think people are underestimating how quickly this is going to happen. I saw a tweet the other day from a researcher at OpenAI, and he basically said how surprised they would be if you took someone from 10 years ago and told them today that we have a program that can translate between any language in real time, and that anyone can use it for free, all over the world. But he said that this huge breakthrough didn't change anything, the world is still the same. In other words, NVIDIA's stock price went up a lot, but other than that, we still have breakfast every day, I still drink coffee. So I think for crypto, we've also had these huge AI breakthroughs, but crypto is still going on, Bitcoin is not even at its all-time high right now. So when is there going to be a huge turning point? When is there going to be exponential growth?


My current view is that this turning point will come when we have autonomous agents. I think it's obvious to most people in the crypto community why these agents would prefer blockchains over traditional infrastructure. It's hard for me to imagine what it's like to go from zero to one, where there might not be any agents at first, but all of a sudden there might be countless agents, millions, billions of instances of intelligence, independently participating in economic activity. I think a lot of this is going to happen on blockchains, and the potential is huge.


Pondering Durian: I do feel like this is like the USV chart showing the relationship between infrastructure and applications. At one point, the guy at Altimeter basically said that if you look at cloud computing, $400 billion went to applications and about $50 billion went to semiconductor companies. And in generative AI, we're actually in the opposite situation right now, with NVIDIA accumulating $75 billion and OpenAI only accumulating $5 billion. I think that's going to flip. Just like cloud computing, we can expect an upcoming explosion of applications, and most of the value capture will start to flow up to the application layer. So I agree with that 100%.


Travis Good: I think that's why this ecosystem is so important, and why I'm really happy to see all the different approaches and the healthy competition in this space. I think there's going to be huge demand for all of these approaches, and it's going to be a big party. I believe those who can see that are going to be well rewarded. And I hope those of you listening are persuaded by our perspectives, because I'd like you to be financially rewarded as well.


Original link


