Background
Fluorescence microscopy is an indispensable tool for observing cellular dynamics and internal structures. To reduce photobleaching and phototoxicity during long-term observation, researchers must lower light exposure, which inevitably causes a sharp decline in the image signal-to-noise ratio (SNR). Recently, self-supervised deep learning paradigms such as Noise2Noise and Noise2Void have provided a breakthrough, requiring no paired clean images for training. The self-inspired Noise2Noise (SN2N) method goes further, requiring only a single noisy image while improving photon efficiency by one to two orders of magnitude.
However, existing computational denoising methods share a critical weakness: deceptive background artifacts. Conventional patch-wise normalization often over-amplifies weak random noise in the background, inducing networks to fabricate false structures in signal-free regions. For biological samples with sparse signals, these artifacts can be misidentified as subcellular structures or synaptic connections, severely misleading downstream quantitative analysis.
An Interdisciplinary Convergence
Addressing this challenge requires a deep intersection of optical super-resolution imaging, deep learning algorithms, and cell biology analysis. While optical imaging breaks the diffraction limit and deep learning restores images under low-photon conditions, cell biology imposes strict quantitative standards on the authenticity of the restored results. Guided by this interdisciplinary fusion, the research team from Harbin Institute of Technology and Peking University developed the Adaptive-SN2N framework. By analyzing the inherent statistical properties of biological images, the team successfully identified and resolved the systemic artifacts caused by traditional preprocessing strategies.
Theoretical Breakthroughs and Core Workflow
The team systematically revealed the theoretical root of artifact generation in deep learning denoising. Through mathematical derivation, they demonstrated that for low-signal background patches with a tiny local dynamic range (ΔP), patch-wise normalization drastically amplifies noise variance by a factor of 1/ΔP², artificially stretching imperceptible noise across the entire [0,1] interval. Meanwhile, for sparse strong-signal patches, background pixels are compressed into an extremely narrow near-zero range, causing severe data distribution shifts.
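The amplification effect described above can be checked numerically. The sketch below uses entirely synthetic data (the 0.01 noise level and 64×64 patch size are illustrative assumptions, not values from the paper): min-max normalizing a flat background patch is a linear rescaling, so it multiplies the noise variance by exactly 1/ΔP², stretching imperceptible fluctuations across the full [0, 1] interval.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic background patch: flat signal plus weak Gaussian read noise,
# so the local dynamic range ΔP is tiny (hypothetical noise level).
patch = 0.5 + 0.01 * rng.standard_normal((64, 64))

def patchwise_normalize(p):
    """Min-max normalize one patch to [0, 1] using its local range ΔP."""
    delta_p = p.max() - p.min()
    return (p - p.min()) / delta_p, delta_p

norm, delta_p = patchwise_normalize(patch)

# A linear rescaling multiplies the noise variance by exactly 1/ΔP²:
amplification = norm.var() / patch.var()
```

On this patch the originally invisible noise (standard deviation 0.01) ends up spanning the entire normalized range, which is precisely the raw material from which a network can hallucinate structure.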
To overcome this, the team developed a complete computational imaging workflow:
(1) Risk-aware adaptive normalization: The system quantitatively evaluates the risk of each image patch across three dimensions: mean, standard deviation, and skewness. For "high-risk" images containing weak backgrounds or sparse bright signals, the algorithm automatically switches to image-wise (global) normalization. Conversely, for "low-risk" dense structures, it retains patch-wise normalization to maximize local contrast.
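A minimal sketch of such a risk-aware switch follows. Only the three statistics (mean, standard deviation, skewness) come from the description above; the decision rule and all thresholds (`mean_thr`, `std_thr`, `skew_thr`) are hypothetical illustrations, not the published criterion.

```python
import numpy as np

def normalize_adaptive(image, patch, mean_thr=0.1, std_thr=0.05, skew_thr=2.0):
    """Risk-aware normalization sketch (thresholds are hypothetical).

    High risk (weak background, or sparse bright signal flagged by strong
    positive skewness) -> image-wise (global) normalization.
    Low risk (dense structure) -> patch-wise min-max for local contrast.
    """
    mu, sigma = patch.mean(), patch.std()
    z = (patch - mu) / (sigma + 1e-12)
    skewness = np.mean(z ** 3)

    high_risk = (mu < mean_thr and sigma < std_thr) or skewness > skew_thr
    if high_risk:
        lo, hi = image.min(), image.max()   # image-wise (global) range
    else:
        lo, hi = patch.min(), patch.max()   # patch-wise (local) range
    return (patch - lo) / (hi - lo + 1e-12)
```

With global normalization a near-empty patch keeps its values near zero instead of being stretched to [0, 1], while a dense patch still receives full local contrast.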
(2) Self-inspired learning framework: Utilizing a single noisy image, the algorithm generates "twin" image pairs with identical content but independent noise distributions via diagonal spatial resampling and Fourier domain interpolation, enabling self-constrained training without clean references.
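The twin-pair idea can be sketched as follows. This simplified version pairs the two diagonals of each 2×2 pixel block, so the twins share near-identical content but carry independent noise realizations; the published SN2N pipeline additionally uses Fourier-domain interpolation to return to full resolution, which is omitted here.

```python
import numpy as np

def twin_pairs(img):
    """Split one noisy image into two 'twins' by diagonal resampling.

    In each 2x2 block [[a, b], [c, d]], one twin averages the main
    diagonal (a, d) and the other the anti-diagonal (b, c). Simplified
    sketch: the full-resolution Fourier interpolation step is omitted.
    """
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    return (a + d) / 2, (b + c) / 2
```

Because no pixel appears in both twins, their noise is uncorrelated, which is exactly the property Noise2Noise-style training needs from its image pairs.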
(3) Gaussian-weighted overlap inference: To solve boundary discontinuity and stitching artifacts, the team implemented a sliding window strategy with a 50% overlap rate. By applying a 2D Gaussian weighting function, it seamlessly fuses the prediction blocks. Ablation studies quantitatively confirmed that this mechanism brings significant improvements across almost all datasets and metrics.
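A minimal sketch of Gaussian-weighted stitching (the window shape and `sigma_frac` value are assumptions, not the paper's parameters): overlapping prediction tiles are blended with weights peaked at each tile's center, so the unreliable tile borders contribute little and seams disappear.

```python
import numpy as np

def gaussian_window(size, sigma_frac=0.5):
    """2D Gaussian weight peaked at the tile center (sigma is hypothetical)."""
    ax = np.linspace(-1.0, 1.0, size)
    g = np.exp(-ax ** 2 / (2 * sigma_frac ** 2))
    return np.outer(g, g)

def stitch(tiles, positions, shape, tile_size):
    """Fuse overlapping prediction tiles by Gaussian-weighted averaging."""
    out = np.zeros(shape)
    wsum = np.zeros(shape)
    w = gaussian_window(tile_size)
    for tile, (y, x) in zip(tiles, positions):
        out[y:y + tile_size, x:x + tile_size] += w * tile
        wsum[y:y + tile_size, x:x + tile_size] += w
    return out / np.maximum(wsum, 1e-12)
```

With a stride of half the tile size (the 50% overlap described above), every pixel is covered by multiple tiles and the per-pixel weighted average replaces hard tile boundaries with smooth blends.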
Multimodal Validation and Biological Reliability
Experimental results demonstrate that Adaptive-SN2N exhibits outstanding performance on both structured illumination microscopy (SIM) and spinning-disk confocal-based SIM (SD-SIM). In highly challenging dual-color live-cell time-series imaging, Adaptive-SN2N prevented large-area background mis-segmentation during Otsu's thresholding of mitochondria. It also improved the skeletonization of the endoplasmic reticulum (ER) from severely fragmented states to a continuous, complete network topology, reliably capturing the spatiotemporal dynamics of mitochondrial fission and fusion events.
Conclusion and Future Prospects
The Adaptive-SN2N framework systematically resolves the critical challenge of background artifacts in self-supervised denoising. By synergistically integrating risk-aware adaptive normalization, self-inspired learning, and Gaussian-weighted overlap inference, the method suppresses artifacts at the source while maintaining a one-to-two-order-of-magnitude improvement in photon efficiency.
In the future, this risk-aware adaptive processing approach is expected to be extended to broader computational microscopy tasks, such as image segmentation and colocalization analysis. For highly demanding tasks like high-density synaptic tracking in neuroscience or identifying tiny metastatic protrusions in cancer research, artifact-free, high-SNR images will break existing analytical limitations and provide truly reliable computational support for life science research.
Subjects: Imaging analysis; Cells
Artifact-suppressed and adaptive self-inspired learning denoising for super-resolution fluorescence microscopy
31-Mar-2026
The authors declare no competing interests.