Abstract:
Swarm intelligence systems and AI agent systems are rapidly moving into real-world deployments, showing great promise in domains such as emergency response, traffic management, warehousing and logistics, industrial manufacturing, and operational security. However, security and privacy risks are escalating across layers: physical interference, communication tampering, and application-level attacks targeting models, data, and decision processes. Under a unified three-layer (physical, communication, application) framework, this article systematically catalogs the security and privacy threats facing both classes of systems, summarizes their commonalities and differences, and surveys targeted countermeasures, including access control, neighborhood filtering, blockchain-based mechanisms, reinforcement-learning-driven intrusion detection, differential privacy, homomorphic encryption, and federated learning. It further distills transferable defensive patterns and discusses cross-cutting challenges, including security governance, trade-offs between real-time requirements and system performance, and emerging risks in the large-model era (e.g., jailbreaks, prompt injection, tool misuse, and hallucination attacks). The goal of this work is to provide an engineering-oriented, systematic reference for building secure, robust, and trustworthy swarm intelligence and AI agent systems.