Researchers present a comprehensive review of frontier AI applications in computational structural analysis from 2020 to 2025, focusing on graph neural networks (GNNs), sequence-to-sequence (Seq2Seq) and Transformer-based architectures, and physics-informed methods. Published in Smart Construction, their work offers a valuable guide for researchers and engineers to understand fundamental concepts, current research status, existing challenges, and future application prospects.
Design optimization, construction control, and long-term operation and maintenance of civil engineering structures all rely heavily on computational analysis techniques. Traditional analysis primarily depends on numerical approaches represented by finite element (FE) methods, which can be time-consuming and computationally expensive. Recent advances in frontier AI techniques have shown promising potential to overcome these limitations. To capture this research trend, Associate Professor Linghan Song of Fuzhou University and Professor Jiansheng Fan and Associate Professor Chen Wang of Tsinghua University have joined forces to provide this comprehensive review. Their team is at the forefront of smart engineering, specializing in intelligent computational simulation and optimization.
“Over the past decade, this computational bottleneck has motivated the exploration of AI techniques to complement or even replace certain stages of engineering workflows with higher efficiency,” explains Professor Jiansheng Fan. “There remains a pressing need for synthesizing the applications of state-of-the-art AI techniques in structural computational analysis, emphasizing their methodologies, advantages, and inherent limitations relative to traditional machine learning or deep learning methods.”
Their review covers trending AI techniques including GNNs, Seq2Seq/Transformer-based architectures, physics-informed methods, and generative models. “Standard AI often struggles with the complex shapes of buildings. Graph learning, however, could capture the topological information of the structural systems and their irregular meshes more efficiently,” explains Associate Professor Linghan Song, describing why the review places a special spotlight on GNNs. The basic concepts and model variants are elaborated, followed by discussions of various graph representation methods, that is, how physical structures can be effectively mapped into the digital world as graph data.
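The mapping of a structure to graph data can be illustrated with a toy sketch (not taken from the paper): joints become graph nodes carrying feature vectors, members become edges, and a single mean-aggregation message-passing step shows how a GNN propagates information along the structural topology. All names and numbers here are illustrative assumptions.

```python
import numpy as np

# Toy portal frame: 4 joints with features (x, z, vertical load)
node_feats = np.array([
    [0.0, 0.0, 0.0],   # left support
    [0.0, 3.0, 10.0],  # left beam-column joint
    [6.0, 3.0, 10.0],  # right beam-column joint
    [6.0, 0.0, 0.0],   # right support
])
# Members (column, beam, column) as undirected edges
edges = [(0, 1), (1, 2), (2, 3)]

def message_passing_step(h, edges, W):
    """One GCN-style update: mean-aggregate neighbour features, mix, squash."""
    n = h.shape[0]
    agg = np.zeros_like(h)
    deg = np.zeros(n)
    for i, j in edges:
        agg[i] += h[j]; agg[j] += h[i]
        deg[i] += 1.0; deg[j] += 1.0
    agg /= np.maximum(deg, 1.0)[:, None]   # mean over neighbours
    return np.tanh((h + agg) @ W)

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(3, 3))     # illustrative learned weights
h1 = message_passing_step(node_feats, edges, W)
print(h1.shape)  # (4, 3): updated joint embeddings
```

Stacking several such steps lets information flow across multiple members, which is how graph learning respects the topology that grid-based models miss.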
Transformers, widely recognized as the backbone of Large Language Models like GPT, are also emerging as potential tools for structural dynamics. “Technically speaking, Transformers inherit the architecture of Seq2Seq models. This model family deals with sequential data and is capable of simulating complex engineering behaviors, such as the long-range dependencies that arise in dynamic feature learning.” Chen Wang, a key team member who has developed related algorithms for seismic analysis of building structures, offers deep insights into the current research status.
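The long-range dependency capture that makes Transformers attractive for dynamics comes from self-attention, where every time step of a response history can attend to every other step. Below is a minimal NumPy sketch of scaled dot-product self-attention over a synthetic "ground-motion-like" sequence; the sequence, dimensions, and weight matrices are all illustrative assumptions, not the authors' models.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention: each time step attends to all others."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # (T, T) pairwise affinities
    return softmax(scores, axis=-1) @ V       # weighted mix over the whole history

T, d = 50, 8
rng = np.random.default_rng(1)
# Toy sinusoidal "excitation history" embedded in d feature channels
X = np.sin(np.linspace(0, 6 * np.pi, T))[:, None] * rng.normal(size=(1, d))
Wq, Wk, Wv = (rng.normal(scale=0.3, size=(d, d)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (50, 8)
```

Because the attention matrix is dense in time, a step late in the response can draw directly on excitation features from much earlier, unlike recurrent models whose memory decays step by step.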
A major challenge for widespread applications of these frontier AI techniques is their physical interpretability. “Engineers often know how to use AI models, but they aren't always sure why they work,” explains Chen Wang. “In structural engineering, this ‘black box’ nature can pose risks to safety and reliability.” To bridge this gap, the review shines a spotlight on the rapid evolution of physics-informed methods. These cutting-edge approaches aim to make AI ‘understand’ the laws of physics. Current strategies include integrating governing equations directly into the model's loss function, employing energy-based soft constraints, or hard-coding prior mechanical knowledge into the network architecture itself.
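The first of those strategies, putting the governing equations into the loss, can be sketched in a few lines. The example below (an assumption for illustration, not code from the review) uses an undamped single-degree-of-freedom oscillator, m·u″ + k·u = 0, approximates derivatives by finite differences, and forms the canonical PINN-style composite of data misfit plus physics residual; real physics-informed networks would use automatic differentiation instead.

```python
import numpy as np

m, k = 1.0, 4.0                      # mass and stiffness -> omega = 2 rad/s
t = np.linspace(0.0, 2 * np.pi, 2001)
dt = t[1] - t[0]

def physics_loss(u):
    """Mean squared residual of m*u'' + k*u = 0 on interior points."""
    u_tt = (u[2:] - 2 * u[1:-1] + u[:-2]) / dt**2   # central finite difference
    return np.mean((m * u_tt + k * u[1:-1]) ** 2)

def total_loss(u, u_obs, lam=1.0):
    """Data misfit plus physics residual: the composite PINN-style objective."""
    return np.mean((u - u_obs) ** 2) + lam * physics_loss(u)

u_exact = np.cos(2.0 * t)            # satisfies the governing equation
u_wrong = np.cos(3.0 * t)            # violates it
print(physics_loss(u_exact) < 1e-4)  # True: residual ~ 0 for a physical solution
print(physics_loss(u_wrong) > 1.0)   # True: large residual penalizes it
```

During training, minimizing `total_loss` pulls the model toward observations while the residual term steers it toward physically admissible responses, which is exactly the interpretability lever the review highlights.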
The review identifies key bottlenecks in current AI models for structural engineering, such as the over-smoothing of GNNs, the high computational costs of Transformers, and the boundary sensitivity of physics-informed methods. Critical gaps in existing research are summarized, including: (a) a lack of specialized open-source datasets and benchmarks; (b) the trade-off between empirical data and physical information; and (c) significant hurdles regarding the scalability and generalization of these models in engineering applications.
Beyond these models and algorithms, the review explores several rising frontiers in engineering computation.
"We are moving from task-specific tools toward multimodal, pre-trained systems that can 'understand' structural systems across their entire lifecycle," the authors conclude.
This paper, “Frontier AI in computational civil engineering: a review of graph, sequence, physics-informed deep learning, and beyond (2020–2025),” was published in Smart Construction (ISSN: 2960-2033), a peer-reviewed open access journal dedicated to original research articles, communications, reviews, perspectives, reports, and commentaries across all areas of intelligent construction, operation, and maintenance, covering both fundamental research and engineering applications.
Song L, Fan J, Zeng S, Wang C. Frontier AI in computational civil engineering: a review of graph, sequence, physics-informed deep learning, and beyond (2020–2025). Smart Constr. 2026(1):0001, https://doi.org/10.55092/sc20260001.
Journal: Smart Construction
Method of Research: Literature review
Subject of Research: Not applicable
Article Title: Frontier AI in computational civil engineering: a review of graph, sequence, physics-informed deep learning, and beyond (2020–2025)
Article Publication Date: 26-Jan-2026