Abstract:
Large language models (LLMs) have demonstrated immense potential across industries, extending transformative capabilities into specialized domains through LLM-Agents. However, the development of LLM-Agents is hindered by model silos and data fragmentation arising from commercial competition and privacy concerns, which restrict the exchange of model parameters and private datasets. To address this problem, this article proposes Factory for Agents (FAgent), a systematic pipeline that integrates diverse models and datasets across stakeholders to generate LLM-Agents. Its core technology is a collaborative learning paradigm between large and small models that leverages federated learning, transfer learning, knowledge distillation, and reinforcement learning to enable privacy-preserving model-to-model learning and thereby produce Agents. As model competition and data scarcity intensify, FAgent is envisioned to offer a scalable and practical solution for innovation in finance, healthcare, and beyond.