
Teaching AI to spot concrete cracks without heavy labeling

04.07.26 | Maximum Academic Press

A new study shows that self-supervised artificial intelligence may offer a more practical path for detecting concrete cracks in real-world structures. Instead of depending heavily on large, carefully labeled image datasets, the researchers used DinoV2 to learn rich visual features from images and paired it with a lightweight linear classifier for crack recognition. The system performed strongly across multiple public datasets and showed particular advantages in noisy scenes, varied material textures, and imbalanced data conditions. The findings suggest that self-supervised vision models could help make structural inspection faster, more reliable, and less dependent on costly manual data annotation.

Crack detection is central to structural health monitoring because missed damage can threaten the safety and lifespan of bridges, buildings, and other infrastructure. Traditional manual inspection is slow, labor-intensive, and vulnerable to human error, while many deep learning approaches require large volumes of labeled data and often struggle to generalize when cracks appear under unfamiliar conditions, such as different surface textures, lighting, or background noise. Class imbalance is another major problem, since non-crack regions usually far outnumber actual cracks. Based on these challenges, there is a strong need for deeper research on crack detection methods that can remain accurate, robust, and adaptable across diverse real-world datasets.

Researchers from the University of Technology Sydney, the American University of Beirut, the Chinese Academy of Sciences, and Western Sydney University reported (DOI: 10.1007/s11633-025-1553-5) in February 2026 in Machine Intelligence Research that a self-supervised DinoV2-based framework can detect concrete cracks with strong accuracy and cross-dataset generalization, outperforming several widely used supervised deep learning models in challenging inspection scenarios.

The team evaluated four public crack image datasets (CCiC, Xu, HBC2019, and SDNET2018) covering different materials, backgrounds, and degrees of class imbalance. Their framework resized images to 224 × 224 pixels, extracted visual representations with the pre-trained DinoV2_vits14 model, and passed the features to a two-layer linear classification head. The DinoV2-based model was trained for only five epochs, while five supervised baselines (ResNet50, ResNet101, VGG16, MobileNetV2, and DenseNet121) and one self-supervised baseline (MoCo v2) were trained from scratch under standardized settings for comparison. On same-dataset testing, DinoV2 delivered the best results on the Xu, HBC2019, and SDNET2018 datasets, including perfect recall on Xu, an F1-score of 0.9346 and accuracy of 0.9731 on HBC2019, and the highest accuracy, 0.9416, on SDNET2018. In cross-dataset tests, DinoV2 also remained consistently strong, often leading in accuracy and F1-score when models were trained on one dataset and tested on the others. These results suggest that DinoV2 captures more transferable crack features than conventional supervised models, especially in the face of noisy backgrounds and previously unseen data.
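The linear-probe idea described above can be sketched in a few lines. This is an illustrative sketch, not the authors' code: the frozen DinoV2 backbone is mocked with random 384-dimensional features (384 is the embedding width of the ViT-S/14 variant), and the hidden width of 256 in the two-layer head is an assumption, since the study summary does not specify head sizes.

```python
import numpy as np

# Hedged sketch of the paper's approach: a frozen backbone feeds a small
# trainable classification head. In practice the features would come from
# the pre-trained DinoV2_vits14 model applied to 224x224 crack images;
# here the backbone call is mocked so the sketch runs stand-alone.

rng = np.random.default_rng(0)
EMBED_DIM = 384  # embedding width of DinoV2 ViT-S/14 (assumed backbone output)

def mock_backbone(n_images):
    """Stand-in for frozen DinoV2 feature extraction (one vector per image)."""
    return rng.normal(size=(n_images, EMBED_DIM))

# Two-layer classification head; hidden width 256 is a hypothetical choice.
W1 = rng.normal(scale=0.05, size=(EMBED_DIM, 256))
W2 = rng.normal(scale=0.05, size=(256, 1))

def head(feats):
    h = np.maximum(feats @ W1, 0.0)       # first layer + ReLU
    logits = h @ W2                       # second layer -> one logit per image
    return 1.0 / (1.0 + np.exp(-logits))  # sigmoid: crack probability

probs = head(mock_backbone(8))
print(probs.shape)  # (8, 1): one crack probability per input image
```

Only the small head would be trained on labeled crack images; the backbone's weights stay frozen, which is why so few epochs and so little labeled data suffice.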

The study positions self-supervised learning as more than a technical trend: it may solve one of structural monitoring’s most stubborn problems, namely the shortage of broadly representative labeled data. The authors argue that DinoV2’s strength lies in its ability to learn general image features before task-specific classification, allowing it to remain sensitive to crack patterns even when data are complex, noisy, or imbalanced. In safety-critical inspection, that matters because missing a crack can have far greater consequences than a false alarm.

The implications extend beyond benchmark performance. A crack detection system that needs less manual labeling and generalizes better across materials and environments could support more scalable inspection of bridges, pavements, walls, and aging buildings. Such tools may help shift structural monitoring from labor-heavy visual checks toward faster, more autonomous workflows. The results also point to a broader opportunity: self-supervised vision models may become valuable feature engines for engineering diagnostics in settings where labeled data are scarce but reliability is essential. That could make future infrastructure assessment not only more efficient, but also more resilient and deployable in the field.

###

References

DOI

10.1007/s11633-025-1553-5

Original Source URL

https://doi.org/10.1007/s11633-025-1553-5

Funding Information

Open Access funding enabled and organized by CAUL and its Member Institutions.

About Machine Intelligence Research

Machine Intelligence Research (original title: International Journal of Automation and Computing) is published by Springer and sponsored by the Institute of Automation, Chinese Academy of Sciences. The journal publishes high-quality papers on original theoretical and experimental research, targets special issues on emerging topics, and strives to bridge the gap between theoretical research and practical applications.

Article Information

Journal: Machine Intelligence Research
Article Title: Autonomous Detection of Concrete Cracks Using Self-supervised DinoV2
Publication Date: 2-Feb-2026
Competing Interests: The authors declare that they have no competing interests.

Contact Information

Licheng Ou
Machine Intelligence Research
mir@ia.ac.cn

How to Cite This Article

APA:
Maximum Academic Press. (2026, April 7). Teaching AI to spot concrete cracks without heavy labeling. Brightsurf News. https://www.brightsurf.com/news/80EO90Q8/teaching-ai-to-spot-concrete-cracks-without-heavy-labeling.html
MLA:
"Teaching AI to spot concrete cracks without heavy labeling." Brightsurf News, 7 Apr. 2026, https://www.brightsurf.com/news/80EO90Q8/teaching-ai-to-spot-concrete-cracks-without-heavy-labeling.html.