Abstract:
The exponential growth in computational demand driven by emerging applications such as artificial intelligence (AI) and big data has exposed the growing limitations of the traditional von Neumann architecture, most notably the “memory wall” and “power wall” bottlenecks. Compute-in-memory (CIM) technology merges storage and computation by performing operations directly within memory arrays, thereby reducing data-movement energy and improving computational efficiency. Classified by computation signal type, CIM divides into two major directions: analog CIM, which targets high-throughput, low-precision computation, and digital CIM, which targets high-precision, general-purpose computation. This article systematically reviews the field’s evolution, representative implementations, and key challenges, highlighting CIM’s potential for architectural innovation and energy-efficiency optimization. The authors argue that CIM is not merely a technological evolution but a paradigm shift in computing, with broad application prospects in domains such as AI accelerators and edge computing.