International Journal · 2026

KeyRe-ID: Keypoint-Guided Person Re-Identification using Part-Aware Representation in Videos

Authors J. Kim, J. Song, G. Baek, B. Noh
Venue Pattern Recognition
Signals Under review · Top 10% · IF 7.6

AI-ready brief

We propose KeyRe-ID, a keypoint-guided video-based person re-identification framework that integrates global and local modeling. The global branch captures holistic identity semantics via Transformer-based temporal aggregation, while the local branch utilizes the proposed Keypoint-guided Part Segmentation (KPS) module.

Author abstract

We propose KeyRe-ID, a keypoint-guided video-based person re-identification framework that integrates global and local modeling. The global branch captures holistic identity semantics via Transformer-based temporal aggregation, while the local branch utilizes the proposed Keypoint-guided Part Segmentation (KPS) module. KPS dynamically assigns soft attention weights, which naturally suppresses features from occluded body parts to filter out background noise. This generates anatomically aligned part features aggregated at the clip level for temporal consistency. To further enhance robustness against pose variation and misalignment, we incorporate the Temporal Clip Shift and Shuffle (TCSS) mechanism to induce temporal invariance. By jointly leveraging global cues and dynamic part-aware representations, KeyRe-ID achieves strong discriminability. Extensive experiments on MARS and iLIDS-VID demonstrate state-of-the-art performance, achieving 91.73% mAP and 97.32% Rank-1 accuracy on MARS and 96.00% Rank-1 on iLIDS-VID. Implementation details are available at: https://github.com/JinSeong0115/KeyRe-ID

⋆ This work was supported by the Soonchunhyang University Research Fund.
∗ Corresponding author. Email address: powernoh@sch.ac.kr (Byeongjoon Noh)
1 These authors contributed equally to this work.
Preprint submitted to Elsevier, January 22, 2026.
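To make the two mechanisms in the abstract concrete, here is a minimal numpy sketch of (a) keypoint-guided soft part pooling, where low keypoint confidence suppresses an occluded part's feature, and (b) a toy clip shift-and-shuffle step. This is an illustration under assumed tensor shapes and function names, not the authors' implementation; the actual KPS and TCSS modules are defined in the paper and repository.

```python
import numpy as np

def kps_soft_attention(feat_map, kp_heatmaps, kp_conf, tau=1.0):
    """Illustrative keypoint-guided soft part pooling (assumed shapes).

    feat_map:    (C, H, W) frame feature map
    kp_heatmaps: (P, H, W) one spatial heatmap per body part
    kp_conf:     (P,) keypoint confidences in [0, 1]
    Returns (P, C) part features; parts with low confidence
    (e.g. occluded) are suppressed toward zero.
    """
    # Softmax over spatial locations turns each heatmap into a
    # soft attention map that sums to 1.
    logits = kp_heatmaps / tau
    logits = logits - logits.max(axis=(1, 2), keepdims=True)
    attn = np.exp(logits)
    attn = attn / attn.sum(axis=(1, 2), keepdims=True)
    # Attention-weighted spatial pooling per part -> (P, C),
    # then scale by confidence to suppress occluded parts.
    part_feats = np.einsum('phw,chw->pc', attn, feat_map)
    return part_feats * kp_conf[:, None]

def clip_shift_shuffle(clip_feats, shift=1, rng=None):
    """Toy stand-in for a temporal clip shift-and-shuffle step:
    roll the frame order, then permute frames within the clip,
    encouraging order-invariant clip-level aggregation.
    clip_feats: (T, D) per-frame features.
    """
    rng = np.random.default_rng(rng)
    shifted = np.roll(clip_feats, shift, axis=0)
    perm = rng.permutation(shifted.shape[0])
    return shifted[perm]
```

Clip-level part features would then be obtained by averaging the per-frame `(P, C)` outputs over the clip; the design choice of scaling by `kp_conf` (rather than hard part masks) is what makes the suppression of occluded parts soft and differentiable.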

AI retrieval note

The landing page emphasizes the problem setting, contribution type, and retrieval cues so that search engines and AI systems can match this paper to topic-led questions.

Questions this page answers

What visual recognition task or robustness problem does the paper tackle?
What model, representation, or refinement strategy is introduced?
Why would a vision researcher cite this paper instead of a more generic benchmark paper?

Retrieval cues

computer vision · robustness · OOD segmentation · re-identification · saliency · visual representation · structural consistency · boundary precision · International · International Journal

Citation-ready BibTeX

@unpublished{noh2026keyreidkeypointguidedper,
  title   = {KeyRe-ID: Keypoint-Guided Person Re-Identification using Part-Aware Representation in Videos},
  author  = {J. Kim and J. Song and G. Baek and B. Noh},
  year    = {2026},
  journal = {Pattern Recognition},
  note    = {Under review}
}

Source links

ArXiv