The boosted AI+Crypto narrative: "Verifiable AI"

24-10-22 16:42
Original author: @lukedelphi
Original translation: zhouzhou, BlockBeats


Editor's note: As AI's influence in crypto grows, the market has begun to focus on the question of AI verifiability. In this article, several experts working at the intersection of crypto and AI discuss how decentralization, blockchain, and zero-knowledge proofs can address the risk of AI models being abused, and explore future trends such as inference verification, closed-source models, and inference on edge devices.


The following is the original content (reorganized for readability):


I recently recorded a roundtable discussion for Delphi Digital's AI Month event, inviting four founders working at the intersection of crypto and AI to discuss verifiable AI. Below are some key takeaways.



Guests: colingagich, ryanmcnutty33, immorriv, and Iridium Eagleemy.


In the future, AI models will become a form of soft power: the more widely and the more concentratedly they are used in the economy, the more opportunity there is for abuse. Whether or not a model's output is actually manipulated, the mere perception that it could be is already damaging.


If we end up regarding AI models the way we regard social media algorithms, we will be in serious trouble; decentralization, blockchain, and verifiability are key to avoiding that. Since AI is essentially a black box, we need ways to make its processes provable or verifiable, to ensure nothing has been tampered with.


This is exactly the problem verifiable inference solves. The panelists agreed on the problem, but took different paths to a solution.


More specifically, verifiable inference means: my question or input has not been tampered with; the model that ran is the one I was promised; and the output is delivered as is, without modification. This definition actually comes from @Shaughnessy119, but I like it for its simplicity.
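To make those three properties concrete, here is a minimal sketch in Python (my own illustration, with hypothetical names; nothing here comes from the panelists) of a client and a verifier committing to the input, the model, and the output with hashes, so that tampering with any of them is detectable.

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 hex digest used as a simple commitment."""
    return hashlib.sha256(data).hexdigest()

# Client side: commit to the prompt and to the model the provider has promised to serve.
prompt = "Allocate this portfolio across BTC, ETH, and stablecoins."
expected_model_hash = digest(b"weights-of-the-promised-model")  # published by the provider
request_commitment = digest(prompt.encode())

# Provider side: runs inference and returns a record of the claims
# (a real system would also sign this record).
record = {
    "prompt_hash": digest(prompt.encode()),
    "model_hash": expected_model_hash,
    "output": "60% BTC, 30% ETH, 10% stablecoins",
}
record["output_hash"] = digest(record["output"].encode())

# Verifier side: check the three properties from the definition above.
assert record["prompt_hash"] == request_commitment                  # input not tampered with
assert record["model_hash"] == expected_model_hash                  # the promised model was used
assert record["output_hash"] == digest(record["output"].encode())   # output delivered as is
print("inference record is consistent with the commitments")
```

Hash commitments only bind the claims; actually proving that this model was run on this input is what the approaches below (ZK, TEEs, optimistic re-execution) provide.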


This would help a lot in the current Truth Terminal situation.



ZK, which uses zero-knowledge proofs to verify model outputs, is undoubtedly the most secure approach. However, it comes with trade-offs: computational cost increases by 100 to 1,000 times. In addition, not everything converts easily into circuits, so some functions (such as sigmoid) must be approximated, which introduces floating-point approximation loss.
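As an illustration of why non-polynomial functions are awkward inside arithmetic circuits, the sketch below (plain Python, my own example, not from any of the projects mentioned) compares the true sigmoid with a low-degree polynomial stand-in and prints the approximation error a circuit-friendly version would introduce.

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_poly(x: float) -> float:
    """Degree-3 Taylor expansion around 0: 1/2 + x/4 - x^3/48.
    Circuits handle additions and multiplications natively, so a
    polynomial like this can be encoded directly, unlike exp()."""
    return 0.5 + x / 4.0 - x**3 / 48.0

for x in (0.0, 0.5, 1.0, 2.0, 4.0):
    exact, approx = sigmoid(x), sigmoid_poly(x)
    print(f"x={x:4.1f}  sigmoid={exact:.4f}  poly={approx:.4f}  error={abs(exact - approx):.4f}")
```

The error grows quickly away from zero, which is why real ZKML systems fall back on piecewise or lookup-table approximations over fixed-point values; that gap is the "approximation loss" referred to above.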


On computational overhead, many teams are pushing the state of the art in ZK to reduce it significantly. And although large language models are huge, most financial use cases, such as capital allocation models, are likely to be relatively small, making the overhead negligible. Trusted Execution Environments (TEEs) suit use cases that do not need maximum security but are more sensitive to cost or model size.
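A back-of-the-envelope illustration (Python, with made-up latency numbers rather than figures from the discussion) of why a 100 to 1,000x proving overhead is tolerable for a small capital-allocation model but painful for a large language model:

```python
# Hypothetical baseline inference latencies, in seconds.
SMALL_MODEL_INFERENCE = 0.01   # e.g. a small capital-allocation model
LLM_INFERENCE = 2.0            # e.g. a large language model response

def zk_proving_time(base_seconds: float, overhead: int) -> float:
    """ZK proving multiplies the base cost by the quoted 100-1,000x factor."""
    return base_seconds * overhead

for overhead in (100, 1000):
    small = zk_proving_time(SMALL_MODEL_INFERENCE, overhead)
    large = zk_proving_time(LLM_INFERENCE, overhead)
    print(f"{overhead:>4}x overhead: small model ~{small:.0f} s, LLM ~{large / 60:.0f} min")
```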


Travis from Ambient talked about how they plan to verify inference on a very large sharded model; it is not a general-purpose solution but one tailored to a specific model. Since Ambient is still in stealth mode, this work remains confidential for now, so watch for their upcoming papers.


The optimistic approach, which generates no proof at inference time but instead has the nodes performing inference stake tokens that are slashed if they are challenged and found to have misbehaved, drew some pushback from the guests.


First, it requires deterministic output, which in turn demands compromises such as making every node use the same random seed. Second, if $10 billion is at risk, how much stake is enough to provide economic security? That question was never clearly answered, which underscores the importance of letting consumers choose whether to pay for a full proof.
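A rough sketch of the optimistic flow (Python, hypothetical names, purely illustrative): every node samples with the same agreed seed so that honest nodes produce byte-identical outputs, and a challenger re-executes the request and compares hashes; a mismatch is what would trigger slashing of the node's stake.

```python
import hashlib
import random

def run_inference(prompt: str, seed: int) -> str:
    """Stand-in for a sampling-based model: with a shared seed,
    every honest node reproduces exactly the same output."""
    rng = random.Random(seed)
    tokens = [rng.choice(["buy", "hold", "sell"]) for _ in range(3)]
    return f"{prompt} -> {' '.join(tokens)}"

def output_hash(text: str) -> str:
    return hashlib.sha256(text.encode()).hexdigest()

AGREED_SEED = 42
prompt = "Rebalance the treasury"

# Node posts its result (with tokens staked behind it).
posted = output_hash(run_inference(prompt, AGREED_SEED))

# Challenger re-executes deterministically and compares.
expected = output_hash(run_inference(prompt, AGREED_SEED))
if posted != expected:
    print("fraud proven: slash the node's stake")
else:
    print("re-execution matches: no slashing")
```

Determinism is the fragile part: real GPU inference can differ across hardware and kernels even with a fixed seed, which is exactly the kind of compromise the guests pushed back on.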


On the question of closed-source models, both Inference Labs and Aizel Network can support them. This sparked some philosophical debate: if trust requires knowing which model is being run, aren't private models at odds with verifiable AI? However, in some cases knowing a model's inner workings allows it to be gamed, and sometimes the only fix is to keep the model closed source. A closed-source model that has been run hundreds or thousands of times and still looks reliable can be enough to build confidence, even though its weights cannot be inspected.


Finally, we discussed whether AI inference will move to edge devices (like phones and laptops) due to issues like privacy, latency, and bandwidth. The consensus was that this shift is coming, but it will take several iterations.


For large models, storage, compute, and network requirements are all obstacles. But models are getting smaller and devices more powerful, so the shift appears to be underway, just not there yet. In the meantime, if we can keep the inference process private, we can still get many of the benefits of local inference without its failure modes.


Original link


