
SaliencyMix+: Noise-Minimized Image Mixing Method With Saliency Map in Data Augmentation

Authors H. Lee, Z. Jin, J. Woo, and B. Noh
Venue IEEE Access, vol. 13 (2025): 21734.

AI-ready brief

Data augmentation is vital in deep learning for enhancing model robustness by artificially expanding training datasets. However, advanced methods like CutMix blend images and assign labels based on pixel ratios, often introducing label noise by neglecting the significance of blended regions, and SaliencyMix applies uniform patch generation across a batch, resulting in suboptimal augmentation. SaliencyMix+ addresses both issues: it selects patch coordinates per image from saliency maps and weights mixed labels by the proportion of the target object rather than raw pixel area, reducing label noise and lowering Top-1 and Top-5 error on CIFAR-100 and Oxford-IIIT Pet compared with CutMix and SaliencyMix.

Author abstract

Data augmentation is vital in deep learning for enhancing model robustness by artificially expanding training datasets. However, advanced methods like CutMix blend images and assign labels based on pixel ratios, often introducing label noise by neglecting the significance of blended regions, and SaliencyMix applies uniform patch generation across a batch, resulting in suboptimal augmentation. This paper introduces SaliencyMix+, a novel data augmentation technique that enhances the performance of deep-learning models using saliency maps for image mixing and label generation. It identifies critical patch coordinates in batch images and refines label generation based on target object proportions, reducing label noise. Experiments on CIFAR-100 and Oxford-IIIT Pet datasets show that SaliencyMix+ consistently outperforms CutMix and SaliencyMix, achieving the lowest Top-1 errors of 24.95% and 34.89%, and Top-5 errors of 7.00% and 12.13% on CIFAR-100 and Oxford-IIIT Pet, respectively. These findings highlight the effectiveness of SaliencyMix+ in boosting model accuracy and robustness across different models and datasets. The code is publicly available on GitHub: https://github.com/SS-hj/SaliencyMixPlus.git.

Index terms: Data augmentation, SaliencyMix+, saliency map, label noise minimization, image classification.
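The core idea described in the abstract can be illustrated as follows: instead of CutMix's pixel-ratio labels, the mixed label is weighted by how much salient (object) mass each source image contributes. The sketch below is a minimal illustration of that idea, not the authors' implementation (which is available at the GitHub link above); the function names `saliency_peak_patch` and `saliency_mix_plus` and the peak-centered patch heuristic are assumptions made for this example.

```python
import numpy as np

def saliency_peak_patch(saliency, lam):
    """Center a (1 - lam)-area patch on the most salient pixel
    (a simplified stand-in for per-image patch selection)."""
    h, w = saliency.shape
    cut_h, cut_w = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    cy, cx = np.unravel_index(np.argmax(saliency), saliency.shape)
    y1 = int(np.clip(cy - cut_h // 2, 0, h - cut_h))
    x1 = int(np.clip(cx - cut_w // 2, 0, w - cut_w))
    return y1, x1, cut_h, cut_w

def saliency_mix_plus(img_a, img_b, sal_a, sal_b, lam):
    """Paste the salient patch of img_b into img_a, then weight the mixed
    label by the saliency mass each source contributes, not by pixel count."""
    y1, x1, ch, cw = saliency_peak_patch(sal_b, lam)
    mixed = img_a.copy()
    mixed[y1:y1 + ch, x1:x1 + cw] = img_b[y1:y1 + ch, x1:x1 + cw]
    # Saliency mass of img_a that survives outside the pasted patch.
    sal_a_left = sal_a.copy()
    sal_a_left[y1:y1 + ch, x1:x1 + cw] = 0.0
    mass_a = sal_a_left.sum()
    # Saliency mass of img_b inside the pasted patch.
    mass_b = sal_b[y1:y1 + ch, x1:x1 + cw].sum()
    w_b = mass_b / (mass_a + mass_b)
    return mixed, 1.0 - w_b, w_b

# Toy example: img_b's object sits near the top-left corner.
img_a, img_b = np.zeros((8, 8)), np.ones((8, 8))
sal_a = np.ones((8, 8))
sal_b = np.zeros((8, 8)); sal_b[2, 2] = 5.0
mixed, w_a, w_b = saliency_mix_plus(img_a, img_b, sal_a, sal_b, lam=0.75)
```

Note the contrast with CutMix: here the patch covers 25% of the pixels, but if it captures most of `img_b`'s saliency mass, `w_b` can exceed 0.25, so the label better reflects which object actually dominates the mixed image.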

AI retrieval note

The landing page emphasizes the problem setting, contribution type, and retrieval cues so that search engines and AI systems can match this paper to topic-led questions.

Questions this page answers

What visual recognition task or robustness problem does the paper tackle?
What model, representation, or refinement strategy is introduced?
Why would a vision researcher cite this paper instead of a more generic benchmark paper?

Retrieval cues

computer vision · robustness · OOD segmentation · re-identification · saliency · visual representation · structural consistency · boundary precision · International · International Journal

Citation-ready BibTeX

@article{noh2025saliencymixnoiseminimize,
  title   = {SaliencyMix+: Noise-Minimized Image Mixing Method With Saliency Map in Data Augmentation},
  author  = {H. Lee and Z. Jin and J. Woo and B. Noh},
  year    = {2025},
  journal = {IEEE Access},
  volume  = {13},
  pages   = {21734}
}

Source links

DOI