AI Seminar: In-N-Out: Robustness to In-Domain Noise and Out-of-Domain Generalization

  • Starts: 1:00 pm on Wednesday, September 18, 2024
  • Ends: 2:00 pm on Wednesday, September 18, 2024


Speaker: Siqi Wang

Title: In-N-Out: Robustness to In-Domain Noise and Out-of-Domain Generalization

Abstract: The complexities of real-world data, which is often noisy and diverse, pose significant challenges for model training. Learning with noisy labels (LNL) seeks to enhance model robustness to in-distribution noise, while domain generalization (DG) aims to ensure models perform well across varied domains. However, these fields have traditionally operated independently, each assuming ideal conditions in the other. Our work, In-N-Out, bridges this gap by striving to train models that can effectively handle both in-domain noise and out-of-domain generalization. We find that combining these two tasks poses new challenges not present in either task when addressed individually, and thus requires direct study. Our benchmark uses three real-world datasets and one synthesized noisy dataset, where an evaluation of a range of older and state-of-the-art (SOTA) methods from LNL and DG, as well as their combinations, reveals unexpected outcomes. In particular, the best method varies across settings, with older methods often beating the SOTA. Challenges arise from unbalanced noise sources and domain-specific sensitivities, which pose difficulties for traditional LNL sample selection strategies. However, LNL regularizers show promise when combined with DG methods.

Bio: Siqi is a fifth-year PhD student in the Image and Video Computing (IVC) group, advised by Professor Bryan A. Plummer. Her research focuses on improving noise detection and enhancing generalization performance.
Location:
CDS 701
