AI + Web3 Integration: Exploring New Opportunities in Decentralization and Computing Power
AI+Web3: Towers and Squares
TL;DR
Web3 projects with AI concepts are becoming attractive targets for fundraising in the primary and secondary markets.
The opportunities for Web3 in the AI industry lie in using distributed incentives to coordinate potential long-tail supply across data, storage, and computing, and, at the same time, in establishing decentralized marketplaces for open-source models and AI Agents.
The main applications of AI in the Web3 industry are on-chain finance (crypto payments, trading, data analysis) and development assistance.
The utility of AI+Web3 is reflected in the complementarity of the two: Web3 is expected to counteract AI centralization, while AI is expected to help Web3 break boundaries.
Introduction
Over the past two years, AI development has been like stepping on the accelerator: the butterfly effect triggered by ChatGPT has not only opened a new world of generative artificial intelligence but has also stirred up currents in the realm of Web3.
Backed by AI concepts, financing in the slowing cryptocurrency market has received a significant boost. Media statistics show that 64 Web3+AI projects completed financing in the first half of 2024, with the artificial-intelligence operating system Zyber365 raising the largest round, $100 million in its Series A.
The secondary market is even more buoyant. Data from a crypto aggregation website shows that in just over a year, the total market capitalization of the AI sector reached $48.5 billion, with 24-hour trading volume close to $8.6 billion. The spillover from mainstream AI breakthroughs is obvious: after OpenAI released its Sora text-to-video model, the average price of the AI sector rose by 151%. The AI effect has also radiated to Meme coins, one of crypto's money-making sectors: GOAT, the first MemeCoin built on the AI Agent concept, quickly became popular, reached a valuation of $1.4 billion, and ignited an AI Meme craze.
Research and discussion around AI+Web3 are just as heated, ranging from AI+DePIN to AI MemeCoins, and now to AI Agents and AI DAOs; FOMO sentiment can hardly keep up with the speed at which new narratives rotate.
AI + Web3, a pairing brimming with hot money, trends, and visions of the future, is inevitably seen by some as a marriage arranged by capital. It seems difficult to tell whether, beneath this magnificent robe, lies a playground for speculators or the eve of an explosive dawn.
To answer this question, a key consideration is whether each side is made better by the other: can each benefit from the other's paradigm? In this article, standing on the shoulders of our predecessors, we attempt to examine this landscape: how can Web3 play a role at each layer of the AI technology stack, and what new vitality can AI bring to Web3?
Part.1 What opportunities are there for Web3 under the AI stack?
Before diving into this topic, we need to understand the technology stack of AI large models:
In simpler terms, the whole process can be described as follows: a "large model" is like the human brain. In the early stages, this brain belongs to a newborn baby who has just come into the world, needing to observe and absorb vast amounts of information from the surrounding environment to understand it. This is the "collection" stage of data. Since computers do not possess multiple senses like human vision and hearing, large amounts of unlabelled information from the external world need to be transformed into a format that computers can understand and use through "preprocessing" before training.
After inputting the data, the AI builds a model with understanding and predictive capabilities through "training", which can be seen as the process of a baby gradually understanding and learning about the world. The model's parameters are akin to the language skills that a baby adjusts continuously during the learning process. When the content of learning starts to become specialized or when feedback is received through communication with others and corrections are made, it enters the "fine-tuning" stage of the large model.
As children gradually grow up and learn to speak, they can understand meaning and express their feelings and thoughts in new conversations. This stage is similar to the "inference" of large AI models, where the model can predict and analyze new language and text inputs. Infants express feelings, describe objects, and solve problems through language, which is akin to how large AI models, once trained and deployed, perform inference on specific tasks such as image classification and speech recognition.
The AI Agent is closer to the next form of the large model: capable of independently executing tasks and pursuing complex goals, possessing not only the ability to think but also the ability to remember, to plan, and to interact with the world using tools.
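To make these stages concrete, here is a deliberately tiny Python sketch in which a linear model stands in for the "brain"; the function names simply mirror the collection, preprocessing, training, fine-tuning, and inference stages described above. It is purely illustrative, and every name in it is invented for this example; real large models replace the linear fit with billions of transformer parameters.

```python
# Toy walk-through of the large-model lifecycle described above:
# collection -> preprocessing -> training -> fine-tuning -> inference.
import numpy as np

def collect():
    # "Collection": gather raw observations (here, synthetic data).
    X = np.random.randn(1000, 8)
    true_w = np.array([1.5, -2.0, 0.5, 0.0, 3.0, -1.0, 0.25, 2.0])
    y = X @ true_w + 0.1 * np.random.randn(1000)
    return X, y

def preprocess(X):
    # "Preprocessing": turn raw input into a normalized, machine-usable format.
    return (X - X.mean(axis=0)) / X.std(axis=0)

def train(X, y, lr=0.05, epochs=300):
    # "Training": fit the model's parameters from scratch.
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fine_tune(w, X_domain, y_domain, lr=0.01, epochs=100):
    # "Fine-tuning": start from pretrained weights and adapt to a narrower domain.
    for _ in range(epochs):
        grad = X_domain.T @ (X_domain @ w - y_domain) / len(y_domain)
        w -= lr * grad
    return w

def infer(w, x_new):
    # "Inference": apply the trained model to unseen input.
    return x_new @ w

X, y = collect()
X = preprocess(X)
w = train(X, y)
w = fine_tune(w, X[:100], y[:100])
print(infer(w, X[:3]))
```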
Currently, in response to AI's pain points at each layer of the stack, Web3 has begun to form a multi-layered, interconnected ecosystem covering every stage of the AI model process.
1. Basic Layer: The Airbnb of Computing Power and Data
Computing Power
Currently, one of the highest costs in AI is the computing power and energy required for model training and inference.
One example: Meta's Llama 3 required 16,000 NVIDIA H100 GPUs (top-of-the-line graphics processors designed specifically for artificial intelligence and high-performance computing workloads) to complete training in 30 days. The 80GB version is priced at $30,000-$40,000 per unit, implying an investment of $400-700 million in computing hardware (GPUs plus network chips), while a month of training consumes 1.6 billion kilowatt-hours, putting energy expenses at nearly $20 million per month.
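As a quick sanity check, the hardware figure follows from simple multiplication; the sketch below uses only the numbers quoted above, which are the article's estimates rather than measured values.

```python
# Back-of-the-envelope check of the hardware figure quoted above.
num_gpus = 16_000                        # H100s reportedly used for Llama 3 training
price_low, price_high = 30_000, 40_000   # quoted unit price of the 80GB H100 (USD)

gpu_cost_low = num_gpus * price_low      # GPUs only, before network chips etc.
gpu_cost_high = num_gpus * price_high
print(f"GPUs alone: ${gpu_cost_low / 1e6:.0f}M - ${gpu_cost_high / 1e6:.0f}M")
# -> roughly $480M - $640M for the GPUs themselves, within the $400-700 million
#    range cited for GPUs plus network chips.
```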
Releasing AI computing power is also one of the earliest intersections of Web3 and AI: DePIN (Decentralized Physical Infrastructure Networks). A DePIN data site currently lists more than 1,400 projects, among which representative GPU computing power sharing projects include io.net, Aethir, Akash, Render Network, and so on.
The main logic is that the platform allows individuals or entities with idle GPU resources to contribute their computing power permissionlessly in a decentralized manner. By creating an online marketplace for buyers and sellers similar to Uber or Airbnb, it raises the utilization rate of underused GPUs and gives end users access to more cost-effective computing resources. At the same time, a staking mechanism ensures that resource providers face corresponding penalties if they violate quality-control rules or suffer network interruptions.
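A minimal sketch of that marketplace-plus-staking logic is shown below; every class, field, and threshold name is an illustrative assumption rather than any specific project's API. Offers are matched on price subject to a quality check, and providers who fall below the quality bar have part of their stake slashed.

```python
# Illustrative matching and settlement for a decentralized GPU marketplace.
from dataclasses import dataclass

@dataclass
class GpuOffer:
    provider: str
    gpus: int
    price_per_gpu_hour: float   # price quoted by the supplier (USD)
    stake: float                # collateral locked by the supplier
    uptime: float = 1.0         # measured by the network's quality checks

def match(offers, gpus_needed, min_uptime=0.95):
    # Route the buyer to the cheapest offer that meets size and quality requirements,
    # much like a ride- or room-sharing marketplace matches supply and demand.
    eligible = [o for o in offers if o.gpus >= gpus_needed and o.uptime >= min_uptime]
    return min(eligible, key=lambda o: o.price_per_gpu_hour, default=None)

def settle(offer, gpus_used, hours, min_uptime=0.95, slash_fraction=0.5):
    # Providers that violated quality rules lose part of their stake;
    # otherwise they are paid for the compute actually delivered.
    if offer.uptime < min_uptime:
        penalty = offer.stake * slash_fraction
        offer.stake -= penalty
        return -penalty
    return gpus_used * offer.price_per_gpu_hour * hours

offers = [
    GpuOffer("idle-datacenter", gpus=64, price_per_gpu_hour=1.20, stake=5_000),
    GpuOffer("mining-farm", gpus=128, price_per_gpu_hour=0.90, stake=8_000, uptime=0.90),
]
best = match(offers, gpus_needed=32)
print(best.provider, settle(best, gpus_used=32, hours=24))  # idle-datacenter ~921.6
```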
Its characteristics are:
Gathering idle GPU resources: suppliers are mainly the surplus computing power of third-party independent small and medium-sized data centers and cryptocurrency mining farms, along with mining hardware from PoS-consensus chains such as FileCoin and ETH. There are also projects working to lower the hardware entry barrier, such as exolab, which uses local devices like the MacBook, iPhone, and iPad to build a computing power network for running large-model inference.
Facing the long tail market of AI computing power:
a. "From a technical perspective, a decentralized computing power market is more suitable for inference steps. Training relies more on the data processing capabilities brought by large-scale GPU clusters, while inference has relatively lower demands on GPU computing performance, such as Aethir focusing on low-latency rendering work and AI inference applications."
b. "From the demand side perspective," small to medium computing power demanders will not train their own large models separately, but will only choose to optimize and fine-tune around a few leading large models, and these scenarios are naturally suited for distributed idle computing power resources.
Data
Data is the foundation of AI. Without data, computation is as meaningless as rootless duckweed. The relationship between data and models mirrors the saying "Garbage in, garbage out": the quantity and quality of the input data determine the quality of the model's final output. For current AI model training, data determines the model's language ability, comprehension, and even its values and human-like behavior. Currently, AI's data demand dilemma centers on the following four aspects:
Data hunger: AI model training relies on massive data input. Public information shows that OpenAI trained GPT-4 with a parameter count at the trillion level.
Data quality: As AI integrates with various industries, the timeliness of data, the diversity of data, the professionalism of vertical-domain data, and the incorporation of emerging data sources such as social media sentiment have all raised new requirements for data quality.
Privacy and compliance issues: Countries and enterprises are gradually recognizing the importance of high-quality datasets and are imposing restrictions on dataset scraping.
High data processing costs: data volumes are large and processing is complex. Public information shows that more than 30% of AI companies' R&D costs go to basic data collection and processing.
Currently, Web3's solutions are reflected in the following four aspects:
The vision of Web3 is to let users who make real contributions share in the value their data creates, and to use distributed networks and incentive mechanisms to obtain more private and more valuable data from users at low cost.
Grass is a decentralized data layer and network where users can contribute idle bandwidth and relay traffic by running Grass nodes to capture real-time data from across the internet and receive token rewards;
Vana introduces a unique Data Liquidity Pool (DLP) concept: users upload their private data (such as shopping history, browsing habits, and social media activity) to a specific DLP and flexibly choose whether to authorize specific third parties to use it (a simplified sketch of this flow follows this list);
On PublicAI, users can post on X with #AI or #Web3 as a classification tag and @PublicAI to have the content collected as data.
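As a rough illustration of the Vana-style Data Liquidity Pool flow referenced above, the sketch below shows users contributing private records and opting in specific third parties; the class and method names are hypothetical, and a real implementation would add encryption, token rewards, and on-chain settlement.

```python
# Minimal, in-memory illustration of a Data Liquidity Pool (DLP)-style flow.
from dataclasses import dataclass, field

@dataclass
class DataRecord:
    owner: str
    payload: dict                          # e.g. shopping history, browsing habits
    authorized_parties: set = field(default_factory=set)

class DataLiquidityPool:
    def __init__(self):
        self.records = []

    def contribute(self, owner, payload):
        # Users upload their own data into the pool.
        record = DataRecord(owner, payload)
        self.records.append(record)
        return record

    def authorize(self, record, third_party):
        # Consent stays with the data owner: access is opt-in per party.
        record.authorized_parties.add(third_party)

    def read(self, requester):
        # A third party only sees records it was explicitly authorized to use.
        return [r.payload for r in self.records if requester in r.authorized_parties]

pool = DataLiquidityPool()
rec = pool.contribute("alice", {"browsing": ["defi", "gpu prices"]})
pool.authorize(rec, "model-trainer-x")
print(pool.read("model-trainer-x"))   # [{'browsing': ['defi', 'gpu prices']}]
print(pool.read("unknown-party"))     # []
```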
Currently, Grass and OpenLayer are both considering adding data annotation as a key step.
Synesis introduced the concept of "Train2earn", emphasizing data quality, where users can earn rewards by providing labeled data, annotations, or other forms of input.
The data labeling project Sapien gamifies tagging tasks and lets users stake points to earn more points.
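The stake-to-earn mechanic behind annotation projects like these can be summed up in a toy settlement function; the reward values and the equality-based quality check below are invented purely for illustration.

```python
# Toy "stake points to earn points" settlement for a labeling task.
def settle_label(label, ground_truth, stake, base_reward=10.0, bonus_rate=0.5):
    # Accepted labels return the stake plus a reward that scales with the stake;
    # rejected labels forfeit the stake entirely.
    if label == ground_truth:
        return stake + base_reward + bonus_rate * stake
    return 0.0

balance = 100.0
stake = 20.0
balance -= stake                            # points are locked when taking the task
balance += settle_label("cat", "cat", stake)
print(balance)                              # 120.0: stake back + 10 base + 10 bonus
```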
Current common privacy technologies in Web3 include:
Trusted Execution Environments (TEE), such as Super Protocol;
Fully Homomorphic Encryption (FHE), such as BasedAI, Fhenix.io, or Inco Network;
Zero-knowledge (ZK) technology, such as Reclaim Protocol, which uses zkTLS to generate zero-knowledge proofs of HTTPS traffic, allowing users to securely import activity, reputation, and identity data from external websites without exposing sensitive information.
However, the field is still in its early stages, and most projects remain exploratory. One current dilemma is that computing costs are too high; some examples include: