The AI Agent framework, as a key component of the industry’s development, may hold dual potential for driving both technological implementation and ecosystem maturity. The most-discussed frameworks in the market include Eliza, Rig, Swarms, and ZerePy. These frameworks attract developers through their GitHub repositories, building their reputations in the process. By issuing tokens alongside a “library,” they exhibit something akin to the wave-particle duality of quantum physics: Agent frameworks carry the significant externalities typically seen in traditional blockchain projects while also displaying the characteristics of Memecoins. This article analyzes the “wave-particle duality” of these frameworks and explores why the Agent framework could be the final missing piece in the ecosystem puzzle.
The Externalities Brought by the Agent Framework: Can They Leave Budding Signs of Spring After the Bubble Deflates?
Since the emergence of GOAT, the narrative around AI Agents has gained increasing momentum in the market, like a martial arts master throwing a left punch of “Memecoin” and a right palm of “Industry Hope” — one of the two strikes is bound to land. In reality, the application scenarios for AI Agents are not strictly delineated, and the boundaries between platforms, frameworks, and specific applications remain blurry. Nevertheless, they can still be roughly categorized by the development preferences of their tokens or protocols:
- Launchpads: Platforms for asset issuance. For example, the Virtuals Protocol and Clanker on the Base chain, and Dasha on the Solana chain.
- AI Agent Applications: Positioned between Agent and Memecoin, these applications excel in memory configuration, such as GOAT, aixbt, and others. Typically, these applications are unidirectional in their outputs, with very limited input conditions.
- AI Agent Engines: These include Griffain on the Solana chain and Spectre AI on the Base chain. Griffain evolves from a read/write mode to a read, write, and action mode, while Spectre AI is a RAG engine for on-chain search.
- AI Agent Frameworks: For framework platforms, the Agent itself is an asset. The Agent framework therefore serves as the asset issuance platform for Agents — essentially a Launchpad for Agents. Prominent projects in this category include ai16z, Zerebro, ARC, and the recently much-discussed Swarms.
- Other Niches: General-purpose Agent Simmi; the AgentFi protocol Mode; falsification-oriented Agent Seraph; and real-time API Agent Creator.Bid.
Further discussion of the Agent framework reveals its significant externalities. Unlike major public chains and protocols, which can only compete for developers across different programming-language environments, the industry’s overall developer base has not grown in step with its market capitalization. GitHub repositories are where both Web2 and Web3 developers build consensus; developer communities established there hold more substantial impact and appeal for Web2 developers than the “plug-and-play” packages individual protocols typically ship.
The four frameworks discussed in this article are all open-source: ai16z’s Eliza framework has 6,200 stars; Zerebro’s ZerePy framework has 191; ARC’s Rig framework has 1,700; and Swarms’ Swarms framework has 2,100. Eliza is currently the most widely adopted, appearing in all kinds of AI Agent applications. ZerePy’s development is still at an early stage, focused primarily on X, and does not yet support native LLMs or integrated memory. Rig presents the steepest development curve but gives developers the greatest freedom for performance optimization. Swarms, aside from the team’s own launch of mcs, has few other use cases, but it holds great potential because it can integrate the other frameworks.
Additionally, separating the Agent engine from the framework in the classification above may cause some confusion, but I believe there is a distinction. First, why call it an engine? The comparison with real-world search engines is apt. Unlike the more homogeneous Agent applications, an Agent engine sits a level above them but is completely encapsulated, adjusted through API interfaces like a black box. Users can fork an engine to experience its performance, but they cannot get the same overview and customization freedom that a base framework offers. Each user’s engine essentially spins up a mirror of a well-tuned Agent, and the user interacts with that mirror.

A framework, by contrast, is designed to adapt to the blockchain, since the ultimate goal of any Agent framework is integration with its chain. How data interaction is defined, how data validation is handled, how block sizes are determined, and how consensus and performance are balanced are all factors the framework must consider. The engine only needs to fine-tune the model and set the relationship between data interaction and memory for one specific direction, with performance as the sole evaluation metric; the framework’s considerations are broader.
Viewing the Agent Framework from the “Wave-Particle Duality” Perspective: A Prerequisite for Ensuring the Right Direction
The lifecycle of an Agent’s input-output process involves three key components. First, the underlying model determines the depth and manner of its thinking. Next, the memory layer is customizable: once the base model has produced an output, that output is adjusted based on the memory. Finally, the output operation is completed on different clients.
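This three-stage lifecycle can be sketched as a simple pipeline. All function and variable names below are illustrative, not taken from any specific framework:

```python
# Illustrative sketch of the three-stage agent lifecycle; every name here
# is hypothetical, not an API from Eliza, ZerePy, Rig, or Swarms.

def base_model(prompt: str) -> str:
    # Stage 1: the underlying model determines how the agent "thinks";
    # a real agent would call an LLM here.
    return f"draft reply to: {prompt}"

def apply_memory(draft: str, memory: list[str]) -> str:
    # Stage 2: the customizable memory layer adjusts the raw model output.
    context = "; ".join(memory[-2:])  # use only the most recent memories
    return f"{draft} (grounded in: {context})"

def dispatch(reply: str, client: str) -> str:
    # Stage 3: the finished output is delivered on a specific client.
    return f"[{client}] {reply}"

memory = ["user prefers short answers", "topic: agent frameworks"]
out = dispatch(apply_memory(base_model("what is an agent?"), memory), client="X")
```

In a real framework, `base_model` would query an LLM, `apply_memory` would consult a vector store or conversation history, and `dispatch` would post through a client SDK.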
To validate the “wave-particle duality” of the Agent framework, the “wave” aspect reflects the characteristics of “Memecoin,” symbolizing community culture and developer activity, emphasizing the attractiveness and viral spread of the Agent. The “particle” aspect represents the characteristics of “industry expectations,” indicating the underlying performance, real-world use cases, and technical depth. I will elaborate on this from two perspectives, using the development tutorials of three frameworks as examples.
The Rapid Assembly Eliza Framework
1. Setting Up the Environment
2. Installing Eliza
3. Configuration Files
4. Setting the Agent’s Personality
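The last step, setting the Agent’s personality, is done through a character file. The sketch below builds one in Python and serializes it to JSON; the field names follow Eliza’s commonly documented character-file format, but treat the exact schema as an assumption and check the repository’s example characters:

```python
import json

# A minimal character definition; field names (name, clients, modelProvider,
# bio, style) follow Eliza's public character-file format, but verify the
# exact schema against the repo's examples before relying on it.
character = {
    "name": "ExampleAgent",
    "clients": ["twitter"],                  # platforms the agent runs on
    "modelProvider": "openai",               # which LLM backend answers prompts
    "bio": ["A concise on-chain analyst."],
    "style": {"all": ["terse", "factual"]},  # tone applied to every output
}

# Eliza loads characters from JSON files, so serialize the dict.
payload = json.dumps(character, indent=2)
```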
The Eliza framework is relatively easy to get started with. It is built on TypeScript, a language most Web and Web3 developers are familiar with, and it stays straightforward, avoiding unnecessary abstraction so developers can easily add the features they want. As seen in step 3, Eliza supports multi-client integration and can be understood as an assembler for multi-client setups. It supports platforms such as Discord, Telegram, and X, and integrates various large language models (LLMs): input arrives via social media platforms and output flows through an LLM. It also offers built-in memory management, enabling developers with different development habits to deploy AI Agents quickly.
Thanks to the framework’s simplicity and rich set of interfaces, Eliza significantly lowers the entry barrier and provides a relatively unified interface standard.
One-Click Setup ZerePy Framework
1. Forking the ZerePy Library
2. Configuring X and GPT
3. Setting the Agent’s Personality
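The three steps above boil down to filling out an agent configuration: which platforms to connect, which model to use, and what personality to give the agent. The sketch below is illustrative only; every field name is an assumption, not ZerePy’s actual schema:

```python
import json

# An illustrative agent configuration covering the three setup steps;
# the field names here are assumptions, not ZerePy's actual schema.
agent = {
    "name": "ExampleZereAgent",
    "bio": "Posts short market notes.",
    "connections": [                        # step 2: configure X and GPT
        {"name": "twitter", "timeline_read_count": 10},
        {"name": "openai", "model": "gpt-4o-mini"},
    ],
    "personality": "dry, data-driven",      # step 3: the agent's persona
}
config = json.dumps(agent, indent=2)
```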
Performance Optimization Rig Framework
For building a RAG (Retrieval-Augmented Generation) Agent, for example:
1. Configuring Environment and OpenAI Key
2. Setting Up the OpenAI Client and Using Chunking for PDF Processing
3. Setting Document Structure and Embeddings
4. Creating Vector Storage and RAG Agent
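Since Rig itself is Rust, the sketch below uses Python only to illustrate the language-agnostic shape of steps 2–4: chunking a document, embedding the chunks, storing the vectors, and retrieving the best match for a query. The toy bag-of-words `embed` stands in for a real embedding-model call:

```python
import math
from collections import Counter

def chunk(text: str, size: int = 40) -> list[str]:
    # Step 2: split the source document into fixed-size chunks.
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(text: str) -> Counter:
    # Step 3: map text to a vector; a real system calls an embedding model,
    # this toy version just counts words.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Step 4: a vector store is just (chunk, embedding) pairs plus a search.
doc = "Rig is a Rust framework. It optimizes LLM workflows. RAG adds retrieval."
store = [(c, embed(c)) for c in chunk(doc)]

def retrieve(query: str, k: int = 1) -> list[str]:
    # The RAG agent feeds the top-k retrieved chunks to the LLM as context.
    q = embed(query)
    return [c for c, _ in sorted(store, key=lambda p: -cosine(q, p[1]))[:k]]

best = retrieve("Rust framework")[0]
```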
Rig (ARC) is an AI system construction framework based on Rust, designed for optimizing the underlying performance of Large Language Model (LLM) workflows. In other words, ARC functions as an “AI engine toolkit” that provides backend support services such as AI invocation, performance optimization, data storage, and exception handling.
Rig addresses the “invocation” challenge by helping developers make better LLM choices, optimize prompt engineering, effectively manage tokens, handle concurrency, manage resources, and reduce latency. Its focus is on how to “optimize the use” of LLMs within the context of AI Agent systems.
Rig is an open-source Rust library aimed at simplifying the development of LLM-driven applications (including RAG Agents). Due to its deeper level of openness, it places higher demands on developers, requiring a better understanding of Rust and Agent systems. The tutorial presented here outlines the basic configuration for a RAG Agent, which enhances LLMs by integrating them with external knowledge retrieval.
Other demos on the official website show that Rig features:
- Unified LLM Interface: Supports consistent API standards across different LLM providers, simplifying integration.
- Abstracted Workflows: Pre-built modular components allow Rig to handle the design of complex AI systems.
- Integrated Vector Storage: Built-in support for custom storage systems, offering efficient performance for search-based agents like RAG.
- Flexible Embedding: Provides easy-to-use APIs for handling embeddings, reducing the complexity of semantic understanding in search-based agents like RAG.
Compared to Eliza, Rig provides developers with additional space for performance optimization, enabling better tuning of LLM and Agent interactions. Rig leverages the performance advantages of Rust, with zero-cost abstractions, memory safety, and high-performance, low-latency LLM operations, thus offering greater freedom at the foundational level.
Decomposable and Combinable Swarms Framework
Swarms aims to provide an enterprise-level, production-grade multi-Agent orchestration framework. The official website offers dozens of workflows and parallel/serial agent architectures. Below, we highlight a small selection of these.
Sequential Workflow
The Sequential Swarm architecture processes tasks in a linear order. Each Agent completes its task before passing the result to the next Agent in the chain. This structure ensures orderly processing and is especially useful in cases where tasks have dependencies.
Use Cases:
- Tasks in a workflow where each step depends on the previous one, such as assembly lines or sequential data processing.
- Scenarios where strict operation order is required.
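The sequential pattern reduces to function composition: each agent consumes the previous agent’s output. A minimal sketch with hypothetical agent roles:

```python
# Each agent is a function that transforms the previous agent's output;
# the roles (researcher, writer, editor) are hypothetical examples.

def researcher(task: str) -> str:
    return f"facts about {task}"

def writer(facts: str) -> str:
    return f"draft based on {facts}"

def editor(draft: str) -> str:
    return f"polished {draft}"

def sequential_swarm(task: str, agents) -> str:
    # Strict linear order: each step depends on the one before it.
    result = task
    for agent in agents:
        result = agent(result)
    return result

out = sequential_swarm("agent frameworks", [researcher, writer, editor])
```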
Hierarchical Architecture:
This architecture provides top-down control, with higher-level Agents coordinating tasks among lower-level ones. Agents execute tasks concurrently, and their results are fed back into the loop for final aggregation. This structure is highly effective for tasks that are amenable to parallel processing.
Spreadsheet Architecture:
This architecture is designed for managing large-scale groups of Agents working concurrently. It can handle thousands of Agents, each running in its own thread. It’s ideal for supervising the output of large numbers of agents.
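The hierarchical and spreadsheet architectures share one mechanical core: a coordinator fans tasks out to worker agents running concurrently (one thread per agent, as in the spreadsheet architecture) and aggregates the results. A minimal sketch with illustrative names:

```python
from concurrent.futures import ThreadPoolExecutor

def worker_agent(task: str) -> str:
    # A lower-level agent handling one cell of the workload; a real
    # worker would invoke an LLM or an on-chain action here.
    return f"done:{task}"

def coordinator(tasks: list[str]) -> list[str]:
    # The higher-level agent: fan out one thread per worker, then
    # aggregate the results in task order for final processing.
    with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
        return list(pool.map(worker_agent, tasks))

results = coordinator(["a", "b", "c"])
```

`ThreadPoolExecutor.map` preserves input order, which is what lets the coordinator aggregate results deterministically even though the workers run concurrently.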
Swarms is not just an Agent framework; it is also compatible with the aforementioned Eliza, ZerePy, and Rig frameworks. By applying modularity, Swarms maximizes Agent performance across various workflows and architectures to address specific challenges. The concept and developer community progress behind Swarms are solid.
- Eliza: The most user-friendly, suitable for beginners and rapid prototyping, especially for AI interactions on social media platforms. The framework is simple, allowing for quick integration and modifications, making it ideal for scenarios that do not require extensive performance optimization.
- ZerePy: One-click deployment, ideal for quickly developing AI agent applications for Web3 and social platforms. Suitable for lightweight AI applications, the framework is simple, with flexible configurations, making it great for rapid setup and iteration.
- Rig: Focuses on performance optimization, excelling in high-concurrency and high-performance tasks. It is suited for developers who need fine control and optimization. The framework is more complex, requiring knowledge of Rust, making it suitable for more experienced developers.
- Swarms: Ideal for enterprise-level applications, supporting multi-agent collaboration and complex task management. The framework is flexible, supporting large-scale parallel processing and offering various architecture configurations. However, due to its complexity, it may require a stronger technical background to use effectively.
Overall, Eliza and ZerePy excel in ease of use and rapid development, while Rig and Swarms are more suited for professional developers or enterprise applications requiring high performance and large-scale processing.
This is why Agent frameworks are considered to have “industry hope” characteristics. The frameworks mentioned above are still in their early stages, and the priority should be on securing a first-mover advantage and building an active developer community. The framework’s performance and its relative position compared to popular Web2 applications are not the primary concerns. The key to success lies in the framework’s ability to attract developers; without an easy-to-use platform, even the most powerful frameworks can become obsolete. Once a framework succeeds in attracting developers, those with more mature and comprehensive token economic models will emerge as the winners.
The “Memecoin” characteristic of Agent frameworks is easy to understand. The frameworks above lack a reasonable tokenomics design: their tokens often have no use case, or use cases that are too narrow. Without proven business models or effective token flywheels, these frameworks remain just frameworks, disconnected from their tokens. As a result, token price increases depend largely on FOMO (Fear of Missing Out) rather than solid fundamental support, and there is no real moat to ensure stable, sustainable value growth. Moreover, the frameworks themselves still look somewhat crude, and their actual value does not match their current market caps; hence the strong “Memecoin” characteristics.
It is important to note that the “wave-particle duality” of Agent frameworks is not a flaw, and it should not be hastily read as a half-baked concept that is neither a pure Memecoin nor a token with real use cases. As I mentioned in a previous article, lightweight Agent frameworks wear the ambiguous guise of Memecoins; community culture and fundamentals need no longer contradict each other, and a new path of asset development is gradually emerging. Although the early stages of Agent frameworks involve some speculation and uncertainty, their potential to attract developers and drive real-world applications cannot be ignored. In the future, frameworks with well-developed tokenomics and robust developer ecosystems will likely become the key pillars of this sector.
About BlockBooster:
BlockBooster is an Asian Web3 venture studio backed by OKX Ventures and other leading organizations, aiming to be the trusted teammate of promising builders. We bridge Web3 projects and the real world through strategic investment and deep incubation.
Disclaimer:
This article/blog is provided for informational purposes only. It represents the views of the author(s) and does not represent the views of BlockBooster. It is not intended to provide (i) investment advice or an investment recommendation; (ii) an offer or solicitation to buy, sell, or hold digital assets; or (iii) financial, accounting, legal, or tax advice. Digital asset holdings, including stablecoins and NFTs, involve a high degree of risk, can fluctuate greatly, and can even become worthless. You should carefully consider whether trading or holding digital assets is suitable for you in light of your financial condition. Please consult your legal/tax/investment professional for questions about your specific circumstances. Information (including market data and statistical information, if any) appearing in this post is for general information purposes only. While all reasonable care has been taken in preparing this data and these graphs, no responsibility or liability is accepted for any errors of fact or omission expressed herein.