Abstract:
The emergence of large foundation models has begun to reshape the use of artificial intelligence in scientific research. Compared with early AI for Science methods, which were designed for specific tasks and depended heavily on domain-specific modeling, foundation models introduce unified representations and cross-task transfer capabilities, enabling AI systems to participate in a wider range of scientific activities. The role of AI is thus gradually expanding from computational assistance to a more integrated part of the scientific research process. In the scientific context, the value of foundation models lies not only in improved predictive performance, but also in their ability to organize complex information and support exploratory research workflows. At the same time, the stringent requirements of scientific research—such as reliability, interpretability, and verifiability—pose fundamental challenges to their practical adoption. Issues such as the scientific credibility of model outputs, data-driven biases, and the boundaries of human–AI collaboration need to be carefully considered. Against this background, this article discusses the evolution of foundation models in AI for Science by examining recent developments and representative application scenarios. From a methodological perspective, it analyzes both the opportunities that large models offer to scientific research and their inherent limitations. The article concludes with a cautious outlook on future directions, emphasizing the importance of establishing trustworthy AI systems consistent with the core principles of scientific inquiry.