Towards Efficient Ensemble Learning Techniques for Semi-Supervised and Universal Domain Adaptation


Date: 20th Mar 2024

Time: 03:30 PM

Venue: SSB-233 (ALC)

PAST EVENT

Details

Starting with the introduction of AlexNet, deep learning techniques have achieved remarkable success in various recognition tasks for Computer Vision applications. These successes rest upon two key assumptions: ample labeled training data, and training and test data that originate from the same domain. However, real-world applications often involve training and testing data from distinct domains, and collecting large labeled datasets for each domain requires extensive investments of time and resources. A potential solution to this issue is Domain Adaptation, which mitigates domain shift by transferring knowledge from labeled source data (the source domain) to target data (the target domain).

-------------------

In this talk, we will present our work on semi-supervised and universal domain adaptation.

The first work focuses on Semi-Supervised Domain Adaptation (SSDA), an extension of Unsupervised Domain Adaptation in which a few labeled samples per class are also available from the target domain during training. We explore the use of modern backbones in SSDA and design an efficient ensemble framework, built on a novel diversity module and co-training, that achieves state-of-the-art results across three benchmark datasets in relatively less training time.

The second work focuses on Universal Domain Adaptation (UniDA), a challenging variant of Unsupervised Domain Adaptation with label shift. UniDA poses three primary challenges. The first is overfitting to the source domain data; we tackle this from a new perspective by using a stochastic binary network, treating the classifier weights as random variables modeled with a Gaussian distribution rather than as a point weight vector. The second is private class detection, for which we propose a novel confidence score estimation technique based on the consistency between the outputs of sampled stochastic binary networks. The third is fragmented feature distributions in the target domain, which we address efficiently by formulating a loss function within a deep discriminative clustering framework. Combining all these techniques, our framework achieves state-of-the-art results on three benchmark datasets; notably, on the most challenging VisDA dataset, our method yields an 8.5% boost in H-score.
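The abstract does not spell out the speaker's exact formulation, but the consistency-based confidence idea can be illustrated with a minimal NumPy sketch: treat the classifier weights as Gaussian random variables, sample several classifiers, and score a target sample by how much the sampled predictions agree. For brevity the sketch uses a softmax classifier in place of the binary networks, and all names, dimensions, and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def consistency_confidence(x, w_mu, w_sigma, n_samples=20):
    """Sample classifier weights W ~ N(w_mu, w_sigma^2), classify the
    feature vector x with each sampled classifier, and score confidence
    by how consistent the sampled outputs are (low variance = consistent)."""
    probs = []
    for _ in range(n_samples):
        w = w_mu + w_sigma * rng.standard_normal(w_mu.shape)
        probs.append(softmax(x @ w.T))
    probs = np.stack(probs)               # (n_samples, n_classes)
    return 1.0 - probs.var(axis=0).sum()  # high score = consistent predictions

# Toy 2-D feature space with two shared classes (illustrative numbers).
w_mu = np.array([[4.0, 0.0],
                 [0.0, 4.0]])
w_sigma = 0.3 * np.ones_like(w_mu)

# A sample close to a shared class gets consistent predictions; an
# ambiguous one does not, flagging a possible private-class sample.
conf_known = consistency_confidence(np.array([5.0, 0.2]), w_mu, w_sigma)
conf_ambiguous = consistency_confidence(np.array([0.5, 0.5]), w_mu, w_sigma)
```

In a UniDA pipeline, thresholding such a score would let the model reject target samples from private (source-unseen) classes instead of forcing them into a shared class.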
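The deep discriminative clustering loss is likewise only named in the abstract; a common formulation of that family of objectives, shown here as an assumption rather than the speaker's actual loss, makes each target prediction confident (low conditional entropy) while keeping cluster sizes balanced (high marginal entropy), which pulls fragmented target features into coherent clusters.

```python
import numpy as np

def discriminative_clustering_loss(probs, eps=1e-12):
    """probs: (n_samples, n_classes) predicted class probabilities on
    target data.  Minimizing (conditional entropy - marginal entropy)
    rewards confident per-sample predictions and balanced cluster sizes."""
    cond_ent = -(probs * np.log(probs + eps)).sum(axis=1).mean()
    marg = probs.mean(axis=0)
    marg_ent = -(marg * np.log(marg + eps)).sum()
    return cond_ent - marg_ent

# Confident and balanced assignments achieve the lowest loss; uniform
# (unconfident) or collapsed (single-cluster) assignments do not.
loss_sharp = discriminative_clustering_loss(np.array([[0.99, 0.01],
                                                      [0.01, 0.99]]))
loss_uniform = discriminative_clustering_loss(np.array([[0.5, 0.5],
                                                        [0.5, 0.5]]))
loss_collapsed = discriminative_clustering_loss(np.array([[0.99, 0.01],
                                                          [0.99, 0.01]]))
```

This quantity is the negative mutual information between samples and cluster assignments, which is why minimizing it discourages both fragmented and degenerate target clusters.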

Speakers

SAURABH KUMAR JAIN

Computer Science and Engineering