CDFSL-V: Cross-Domain Few-Shot Learning for Videos

University of Central Florida
ICCV 2023

Abstract

Few-shot video action recognition is an effective approach to recognizing new categories with only a few labeled examples, thereby reducing the challenges of collecting and annotating large-scale video datasets. Existing few-shot action recognition methods assume that the labeled base data and the novel categories come from the same domain. However, this setup is not realistic, as novel categories may come from data domains with different spatial and temporal characteristics. This dissimilarity between the source and target domains poses a significant challenge and renders traditional few-shot action recognition techniques ineffective. To address this issue, we propose a novel cross-domain few-shot video action recognition method that leverages self-supervised learning and curriculum learning to balance the information from the source and target domains. In particular, our method employs a masked autoencoder-based objective to learn from both source and target data in a self-supervised manner. A progressive curriculum then balances the class-discriminative information learned from the labeled source dataset with the generic information learned from the unlabeled target domain: training begins with supervised learning of class-discriminative features on the source data and gradually transitions to learning target-domain-specific features, encouraging rich target-domain representations to emerge on top of the discriminative source features. We evaluate our method on several challenging benchmark datasets and demonstrate that our approach outperforms existing cross-domain few-shot learning techniques.
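
At a high level, the curriculum described above amounts to a scheduled trade-off between a supervised loss on labeled source clips and a masked-autoencoder reconstruction loss on unlabeled target clips. The following is a minimal PyTorch-style sketch of that idea, not the paper's exact implementation: the sigmoid schedule, the `mae_loss_fn` callable, and the module interfaces are illustrative assumptions.

```python
import math
import torch
import torch.nn.functional as F

def curriculum_weight(step: int, total_steps: int, steepness: float = 10.0) -> float:
    """Assumed sigmoid schedule: close to 0 early in training (favoring the
    supervised source loss) and close to 1 late in training (favoring the
    self-supervised target loss)."""
    progress = step / max(total_steps, 1)
    return 1.0 / (1.0 + math.exp(-steepness * (progress - 0.5)))

def training_step(encoder, classifier, mae_loss_fn,
                  src_clips, src_labels, tgt_clips,
                  step: int, total_steps: int) -> torch.Tensor:
    # Supervised branch: class-discriminative features from labeled source clips.
    logits = classifier(encoder(src_clips))
    sup_loss = F.cross_entropy(logits, src_labels)

    # Self-supervised branch: masked-autoencoder reconstruction on unlabeled
    # target clips. `mae_loss_fn` is an assumed callable that masks space-time
    # patches, reconstructs them, and returns the reconstruction loss.
    ssl_loss = mae_loss_fn(tgt_clips)

    # Progressive curriculum: the weight shifts from source-supervised learning
    # toward target self-supervised learning as training progresses.
    lam = curriculum_weight(step, total_steps)
    return (1.0 - lam) * sup_loss + lam * ssl_loss
```

In this sketch, early steps are dominated by the cross-entropy term on source videos, while later steps are dominated by the reconstruction term on target videos, mirroring the source-to-target transition described in the abstract.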

Video Presentation

Poster

BibTeX


@misc{samarasinghe2023cdfslv,
      title={CDFSL-V: Cross-Domain Few-Shot Learning for Videos},
      author={Sarinda Samarasinghe and Mamshad Nayeem Rizve and Navid Kardan and Mubarak Shah},
      year={2023},
      eprint={2309.03989},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}