Original title: AI <> Crypto Projects That Aren't Complete Bullsh*t
Original author: 563, former Bankless researcher
Original translation: Deep Tide TechFlow
Navigating the intersection of crypto and artificial intelligence.
When hunting for new alpha, we inevitably wade through a lot of junk. When a project can quickly raise five to six figures with nothing more than a half-clear pitch and some decent branding, speculators will seize every new narrative. And as traditional finance has piled into the AI trend, the "crypto AI" narrative has made this problem worse.
The problem with most of these projects is that:
1. Most crypto projects don’t need AI
2. Most AI projects don’t need cryptocurrency
Not every decentralized exchange (DEX) needs an AI assistant built in, nor does every chatbot need an accompanying token to facilitate its adoption curve. This hard-wired marriage of AI and crypto almost broke me when I first dug into this narrative.
The bad news? Continuing down the current path and further centralizing this technology will only end in failure, and the plethora of fake “AI x Crypto” projects will hinder our ability to turn things around.
The good news? There is light at the end of the tunnel. Sometimes AI does benefit from cryptoeconomics. Likewise, there are real problems that AI can solve in some cryptocurrency use cases.
In today’s post, we’ll explore these key intersections. The overlap of these niche innovative ideas forms a whole that is greater than the sum of its parts.
Here’s my take on the different verticals in the “Crypto AI” ecosystem (check out Tommy’s post if you want to go deeper). Note that this is a very simplified view, but hopefully it helps us lay the foundation.
At a high level, here’s how it all works together:
· Data is collected at scale.
· This data is processed so that machines understand how to ingest and apply it.
· Models are trained on this data to create a general model.
· This can then be fine-tuned to handle specific use cases.
· Finally, these models are deployed and hosted so that applications can query them for useful implementations.
· All of this requires massive compute resources, which can be run locally or sourced from the cloud.
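The pipeline above can be sketched as a few composable stages. Here is a deliberately toy illustration in Python; every function name is hypothetical, and the "model" is just a token-frequency table standing in for real training:

```python
# Toy sketch of the AI pipeline described above. All names are
# hypothetical illustrations, not any real project's API.

def collect(sources):
    """Stage 1: gather raw text at scale."""
    return [doc for src in sources for doc in src]

def preprocess(raw_docs):
    """Stage 2: clean and tokenize so machines can ingest the data."""
    return [doc.lower().split() for doc in raw_docs]

def train(tokenized):
    """Stage 3: 'train' a general model (here: a token-frequency table)."""
    model = {}
    for doc in tokenized:
        for tok in doc:
            model[tok] = model.get(tok, 0) + 1
    return model

def fine_tune(model, domain_docs):
    """Stage 4: bias the general model toward a specific use case."""
    for doc in preprocess(domain_docs):
        for tok in doc:
            model[tok] = model.get(tok, 0) + 10  # up-weight domain tokens
    return model

def serve(model, query):
    """Stage 5: deploy the model so applications can query it."""
    return max(query.lower().split(), key=lambda t: model.get(t, 0))

sources = [["The cat sat"], ["DeFi yields compound"]]
model = fine_tune(train(preprocess(collect(sources))), ["DeFi DeFi DeFi"])
print(serve(model, "cat or DeFi"))  # the domain-weighted token wins
```

Each stage in the real stack (and the massive compute it consumes) is a candidate for decentralization, which is the thread the rest of this piece pulls on.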
Let’s explore each of these areas, with a particular focus on how different cryptoeconomic designs can actually improve upon standard workflows.
The debate over “closed source” vs. “open source” development approaches dates back to the Windows-Linux debate and Eric Raymond’s famous “The Cathedral and the Bazaar” theory. While Linux is widely used among enthusiasts today, about 90% of users choose Windows. Why? Because of incentives.
Open source development has many benefits, at least from the outside. It allows the maximum number of people to participate in and contribute to the development process. But in this headless structure, there is no unified direction. There is no CEO proactively pushing to get as many people using the product as possible to maximize the bottom line. In open source development, a project risks evolving into a "chimera" that splinters in a different direction at every fork in design philosophy.
What is the best way to align incentives? Build a system that rewards behaviors that advance the goal. In other words, get money into the hands of the actors who move us closer to it. With crypto, this can be hard-coded into the protocol itself.
We’ll take a look at some projects that are doing just that.
“Oh come on, that again?” Yes, I know the DePIN narrative is almost as over-the-top as AI itself, but hang on for a second. I’d like to believe that DePINs are a crypto use case that really has a chance to change the world. Think about it.
What is crypto really good at? Removing intermediaries and incentivizing activity.
Bitcoin’s original vision was a peer-to-peer currency designed to cut out banks. Similarly, modern DePINs are designed to exclude centralized power and introduce provably fair market dynamics. As we’ll see, this architecture is ideal for crowdsourced AI-related networks.
DePINs use early token issuance to increase the supply side (providers) in the hope that this will attract sustainable consumer demand. This is intended to solve the cold start problem for new markets.
This means that early hardware/software providers ("nodes") earn mostly tokens and a little cash. As users (in our case, machine learning builders) start paying to use these nodes, the resulting cash flow begins to offset token issuance as it tapers off, until a fully self-sustaining ecosystem is established (which could take several years). Early examples such as Helium and Hivemapper demonstrate the effectiveness of this design.
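As a toy model of that bootstrapping dynamic (all parameters invented for illustration): suppose token emissions to node providers decay each year while fee revenue from real users grows. Provider income holds up while the mix shifts from subsidy to organic demand:

```python
# Toy DePIN bootstrapping model: a declining token subsidy vs. growing
# fee revenue. All numbers are made up for illustration.

def provider_income(years, emission0=100.0, decay=0.5, fees0=10.0, growth=2.0):
    """Yearly income per node: token emissions halve, user fees double."""
    history = []
    for t in range(years):
        emissions = emission0 * (decay ** t)   # token subsidy, shrinking
        fees = fees0 * (growth ** t)           # cash from real demand, growing
        history.append((round(emissions, 2), round(fees, 2)))
    return history

for year, (emissions, fees) in enumerate(provider_income(5)):
    tag = "self-sustaining" if fees > emissions else "subsidized"
    print(f"year {year}: emissions={emissions}, fees={fees} ({tag})")
```

Under these invented parameters the crossover happens in year 2; in practice it can take years, and many DePINs never get there at all.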
GPT-3 was allegedly trained with 45TB of plain text data, the equivalent of about 90 million novels (and it still can’t draw a circle). GPT-4 and GPT-5 require more data than exists on the surface web, so calling AI “data-hungry” is the understatement of the century.
Getting this data is incredibly difficult if you’re not one of the top players (OpenAI, Microsoft, Google, Facebook). The common strategy for most people is web scraping, which is all well and good until you try to step up. If you use a single Amazon Web Services (AWS) instance to try to scrape a large number of websites, you’ll quickly run into rate limits. That’s where Grass comes in.
Grass connects over two million devices and organizes them to crawl websites from users' IP addresses, collecting, structuring, and selling the resulting data to data-hungry AI companies. In return, users participating in the Grass network earn a steady income from the AI companies using their data.
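A stylized version of why distributing the crawl helps (this is not Grass's actual protocol; the per-IP rate limit and node pool are invented): each IP can only make a few requests per minute before being throttled, so spreading URLs across many residential nodes multiplies throughput:

```python
# Stylized model of distributed web scraping. Not Grass's real protocol:
# the per-IP rate limit and node pool are invented for illustration.

RATE_LIMIT_PER_MINUTE = 5  # requests one IP can make before being throttled

def assign_urls(urls, node_ips):
    """Round-robin URLs across participating nodes' IP addresses."""
    plan = {ip: [] for ip in node_ips}
    for i, url in enumerate(urls):
        plan[node_ips[i % len(node_ips)]].append(url)
    return plan

def minutes_to_crawl(plan):
    """Crawl time is bounded by the busiest single IP."""
    busiest = max(len(batch) for batch in plan.values())
    return -(-busiest // RATE_LIMIT_PER_MINUTE)  # ceiling division

urls = [f"https://example.com/page/{i}" for i in range(100)]

single = minutes_to_crawl(assign_urls(urls, ["1.1.1.1"]))
swarm = minutes_to_crawl(assign_urls(urls, [f"10.0.0.{i}" for i in range(20)]))
print(single, swarm)  # one lone server vs. a 20-node swarm
```

The same 100 pages that take a single rate-limited server 20 minutes take a 20-node swarm one minute, which is the whole pitch of crowdsourced scraping.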
There is no token yet, but a future $GRASS token may give users extra reason to download the browser extension (or mobile app), though Grass has already attracted a large user base through an extremely successful referral campaign.
Perhaps even more important than data is compute. Did you know that in 2020 and 2021, China spent more on GPUs than on oil? That sounds crazy, but it's just the beginning. Goodbye petrodollars; make way for compute.
Right now, there are many GPU DePINs on the market, and they work roughly like this:
1. On one side are machine learning engineers and companies in desperate need of compute.
2. On the other side are data centers, idle mining rigs, and hobbyists with spare GPUs/CPUs.
Despite the huge global supply, there is a lack of coordination. It’s not easy to contact 10 different data centers and have them bid for your usage. A centralized solution would create a rent-seeking intermediary whose incentive is to extract the most value from each party, but crypto can help.
Crypto is very good at creating a market layer that efficiently connects buyers and sellers. A snippet of code doesn’t need to be accountable to the financial interests of shareholders.
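That market layer can be thought of as a simple double auction that pairs compute buyers with sellers. A minimal sketch, with all prices and the matching rule invented for illustration (a real protocol would settle this logic on-chain):

```python
# Minimal sketch of a compute marketplace matching buyers and sellers.
# Prices and the matching rule are invented for illustration.

def match(bids, asks):
    """Greedy match: cheapest asks are filled by the highest bids first."""
    bids = sorted(bids, key=lambda b: -b["price"])   # highest willingness first
    asks = sorted(asks, key=lambda a: a["price"])    # cheapest supply first
    trades = []
    while bids and asks and bids[0]["price"] >= asks[0]["price"]:
        bid, ask = bids.pop(0), asks.pop(0)
        # Clear at the midpoint; no intermediary takes a spread.
        trades.append((bid["who"], ask["who"], (bid["price"] + ask["price"]) / 2))
    return trades

bids = [{"who": "ml_team_a", "price": 3.0}, {"who": "ml_team_b", "price": 1.0}]
asks = [{"who": "datacenter", "price": 2.0}, {"who": "hobbyist", "price": 1.5}]
print(match(bids, asks))
```

The point of the midpoint rule is exactly the sentence above: the code has no shareholders, so nobody pockets the difference between what the buyer was willing to pay and what the seller was willing to accept.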
io.net stands out because it introduced some cool new technology that is critical to AI training - their cluster stack. Traditional clustering involves physically connecting a bunch of GPUs in the same data center so that they can work together to train models. But what if your hardware is distributed across the globe? io.net worked with Ray (used to create ChatGPT) to develop cluster middleware that can connect non-co-located GPUs.
Also, the AWS sign-up process can take days, while clusters on io.net can be launched permissionlessly in 90 seconds. For these reasons, I can see io.net becoming the hub that all other GPU DePINs plug into via its "IO Engine", unlocking built-in clustering and a smooth onboarding experience. All of this is only possible with the help of crypto.
You’ll notice that most of the ambitious decentralized AI projects (Bittensor, Morpheus, Gensyn, Ritual, Sahara) have explicit "compute" requirements. This is where GPU DePINs should slot in: decentralized AI requires permissionless compute.
Back to the Bitcoin inspiration again. Why do miners keep computing hashes quickly? Because that’s how they get paid — Satoshi proposed this architecture because it prioritizes security. What’s the lesson? The incentive structures built into these protocols determine the end products they produce.
Bitcoin miners and Ethereum stakers absorb the bulk of their networks' native token issuance because that is precisely the behavior each protocol wants to incentivize: people becoming miners and stakers.
In an organization, this direction might come from the CEO, who defines the "vision" or "mission statement." But people are fallible and can lead a company off course. Computer code, on the other hand, can stay on mission far longer than even the most dedicated employee. Let's look at a few decentralized projects where built-in token incentives keep participants focused on lofty goals.
What if we put Bitcoin-style miners to work building AI instead of solving otherwise useless math problems? Do that, and you get Bittensor.
Bittensor's goal is to spin up many experimental ecosystems, each aiming to produce "commoditized intelligence." One ecosystem (called a subnet, or "SN" for short) might focus on developing language models, another on financial models, and still others on speech synthesis, AI detection, or image generation (see the currently active subnets).
For the Bittensor network, it doesn’t matter what you want to do. As long as you can prove that your project is worth funding, the incentives will flow. This is the goal of the subnet owner, who registers the subnet and adjusts the rules of the game.
The participants in this "game" are called miners. These are the ML/AI engineers and teams who build the models. They are locked in a constantly audited "Thunderdome" and compete against each other to get the most rewards.
Validators are the other side who are responsible for conducting the audit and scoring the miners' work accordingly. If a validator is found to be colluding with a miner, they will be expelled.
Remember the incentives:
· Miners earn more when they outperform the other miners in their subnet - this drives AI development.
· Validators earn more when they accurately identify high- and low-performing miners - this keeps the subnets fair.
· Subnet owners earn more when their subnet produces more useful AI models than other subnets - this drives subnet owners to optimize their "game".
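Those incentives can be caricatured in a few lines. This is a deliberate simplification, not Bittensor's actual consensus math; the scores, stakes, and emission figure are all invented. Validators score miners, and emissions flow pro-rata to the stake-weighted scores:

```python
# Caricature of subnet reward flow: validators score miners, and emissions
# are distributed pro-rata to stake-weighted scores. A simplification for
# illustration, not Bittensor's actual consensus mechanism.

def distribute(emission, validator_scores, stakes):
    """validator_scores: {validator: {miner: score}}; stakes: {validator: stake}."""
    total_stake = sum(stakes.values())
    weighted = {}
    for validator, scores in validator_scores.items():
        weight = stakes[validator] / total_stake
        for miner, score in scores.items():
            weighted[miner] = weighted.get(miner, 0.0) + weight * score
    total = sum(weighted.values())
    return {miner: emission * w / total for miner, w in weighted.items()}

scores = {
    "validator_1": {"miner_a": 0.9, "miner_b": 0.1},
    "validator_2": {"miner_a": 0.8, "miner_b": 0.2},
}
stakes = {"validator_1": 100, "validator_2": 300}
rewards = distribute(1000, scores, stakes)
print(rewards)  # miner_a earns the lion's share
```

Stake-weighting the scores is what makes collusion expensive: a low-stake validator inflating one miner's score barely moves the payout.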
You can think of Bittensor as a perpetual bounty machine for AI development. A budding machine learning engineer can try to build something, pitch VCs, and hope to raise money. Or they can join one of the Bittensor subnets as a miner and earn a steady stream of TAO. Which is easier?
Some of the top teams are building on the network:
· Nous Research is the king of open source. Their subnet is breaking the mold in fine-tuning open source LLMs. They make the leaderboard impossible to manipulate by testing their models on a continuous stream of synthetic data (unlike traditional benchmarks like HuggingFace).
· Taoshi’s proprietary training network is basically an open source quantitative trading company. They ask ML contributors to build trading algorithms that predict asset price movements. Their API provides quantitative-grade trading signals to retail and institutional users, and is on a fast track to significant profitability.
· Cortex.t, developed by the Corcel team, serves two purposes. First, they incentivize miners to provide API access to top models (like GPT-4 and Claude-3) to ensure continuous availability for developers. They also provide synthetic data generation, which is great for model training and benchmarking (which is why Nous uses it). Check out their tools - Chat and Search.
If nothing else, Bittensor reaffirms the power of incentive structures, all enabled by cryptoeconomics.
Now, let’s look at two aspects of Morpheus:
· Cryptoeconomic structures are building AI (crypto helps AI)
· AI-enabled applications enabling new use cases in crypto (AI helps crypto)
“Smart Agents” are simply AI models trained on smart contracts. They understand the inner workings of all the top DeFi protocols, know where to find yield, where to bridge, and how to spot suspicious contracts. They are the “auto-routers” of the future, and in my opinion, they will be the way everyone interacts with blockchain in 5-10 years. In fact, once we get to that point, you may not even know you are using crypto. You will only tell the chatbot that you want to move some of your savings into another investment, and everything will happen in the background.
Morpheus embodies the "incentivize them and they will come" ethos. Their goal is a platform where smart agents can spread and thrive, each building on the success of the last, in an ecosystem that minimizes externalities.
The token inflation structure highlights four main contributors to the protocol:
· Code — Agent builders.
· Community — Building front-end applications and tools to attract new users to the ecosystem.
· Compute — Providing computational power to run agents.
· Capital — Providing their yield to fuel the Morpheus economic machine.
Each of these categories receives an equal share of $MOR inflation rewards (a small portion is also set aside as an emergency fund), incentivizing each group to:
· Build the best agents — creators get paid when their agents see consistent use. Unlike OpenAI plugins, which builders create for free, this approach pays builders directly.
· Build the best frontends/tools - creators get paid when their creations are used consistently.
· Provide stable computing power - providers get paid when they lend computing power.
· Provide liquidity to projects - earn their share of MOR by maintaining liquidity for projects.
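Numerically, the split described above looks something like this. The emission figure is invented, and the pro-rata rule inside each bucket is my assumption for illustration, not Morpheus's published formula:

```python
# Toy model of Morpheus-style emissions: the daily emission is split
# equally across four contributor groups, then pro-rata within each group.
# All numbers are invented; the within-bucket pro-rata rule is an
# assumption for illustration.

DAILY_EMISSION = 3456.0  # hypothetical MOR emitted per day

def split_rewards(contributions):
    """contributions: {group: {participant: measured_contribution}}."""
    per_group = DAILY_EMISSION / len(contributions)  # equal share per bucket
    payouts = {}
    for group, members in contributions.items():
        total = sum(members.values())
        for who, amount in members.items():
            payouts[who] = per_group * amount / total
    return payouts

payouts = split_rewards({
    "code":      {"agent_dev_1": 3, "agent_dev_2": 1},  # relative agent usage
    "community": {"frontend_team": 1},
    "compute":   {"gpu_provider": 1},
    "capital":   {"lp_1": 1, "lp_2": 1},
})
print(payouts["agent_dev_1"])  # 3/4 of the code bucket
```

The equal four-way split is the interesting design choice: no single group (say, capital) can crowd out the builders, because each bucket's emissions are fixed before the within-bucket competition starts.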
While there are many other AI/smart agent projects, Morpheus’s token economic structure is particularly clear and effective in designing incentive mechanisms.
These smart agents are the ultimate example of how AI can remove barriers to crypto adoption. dApp UX is notoriously bad (despite many improvements over the past few years), and the rise of LLMs has ignited the passion of every would-be Web2 and Web3 founder. While there are plenty of cash-grab projects, great ones like Morpheus and Wayfinder show how easy on-chain transactions could become in the future.
Putting it all together, the interactions between these systems might look a bit like this. Note that this is an extremely simplified view.
Remember our two broad categories of "Crypto x AI":
1. Crypto helps AI
2. AI helps crypto
In this article, we mainly explored the first category. As we have seen, a well-designed token system can lay the foundation for the success of an entire ecosystem.
DePIN architectures can help kick-start markets, and creative token incentive structures can coordinate open source projects toward goals that were once out of reach. Yes, there are several other legitimate intersections that I didn't cover due to space limitations:
· Decentralized storage
· Trusted Execution Environments (TEEs)
· Retrieval-Augmented Generation (RAG)
· Zero-knowledge x machine learning for inference/provenance verification
When deciding whether a new project is truly valuable, ask yourself:
· If it's a spinoff of another established project, is it different enough to stand out?
· Is it just a wrapped version of open source software?
· Does the project truly benefit from crypto, or is crypto shoehorned in?
· Do we really need 100 crypto projects like HuggingFace (a popular open source machine learning platform)?
In this second category, I personally see more fake projects, but there are some genuinely cool use cases. AI models can remove friction in the crypto user experience, especially via intelligent agents. Here are some interesting categories to watch in the AI-powered crypto application space:
· Enhanced Intent Systems - Automated Cross-Chain Operations
· Wallet Infrastructure
· Real-time alert infrastructure for users and applications
If it's just a "chatbot with a token", it's garbage to me. Please stop promoting these projects, for my sanity's sake. Additionally:
· Adding AI won't magically give your failing app/chain/tool product-market fit
· No one will play a bad game just because it has an AI character
· Putting an "AI" label on your project doesn't make it interesting
Where are we going
Despite all the noise, some serious teams are working toward the vision of "decentralized AI", and it's worth fighting for. Beyond projects that incentivize open source model development, decentralized data networks open new doors for emerging AI developers. While most of OpenAI's competitors can't strike large-scale deals with Reddit, Tumblr, or WordPress, distributed scraping can level the playing field.
A single company may never have more computing power than the rest of the world combined, but with a decentralized GPU network, it means that anyone else has the ability to rival the top companies. All you need is a crypto wallet.
Today we are at a crossroads. If we focus on those truly valuable "crypto x AI" projects, we have the ability to decentralize the entire AI stack.
The vision of cryptocurrency is to create a hard currency that no one can interfere with through the power of cryptography. Just as this emerging technology began to gain popularity, a more formidable challenger emerged.
In the worst-case scenario, centralized AI will not only control your finances but also impose its biases on every piece of data we encounter in our daily lives. It will enrich a very small number of tech leaders in a self-perpetuating cycle of data collection, fine-tuning, and redeployment.
It will know you better than you know yourself. It knows which buttons to press to make you laugh more, get angrier, and spend more. Despite appearances, it is not accountable to you.
Crypto was conceived as a force against centralization: it can coordinate decentralized individuals to work toward a common goal. Now that capability faces an enemy more formidable than central banks: centralized AI. This time, time is of the essence, and we need to act quickly to resist AI's centralizing pull.
Original link