ResearchTrend.AI
arXiv: 2204.03714
Using Multiple Self-Supervised Tasks Improves Model Robustness

7 April 2022
Matthew Lawhon, Chengzhi Mao, Junfeng Yang
AAML, SSL
Papers citing "Using Multiple Self-Supervised Tasks Improves Model Robustness"

4 / 4 papers shown

Robustifying Language Models with Test-Time Adaptation
Noah T. McDermott, Junfeng Yang, Chengzhi Mao
29 Oct 2023

Understanding Zero-Shot Adversarial Robustness for Large-Scale Models
Chengzhi Mao, Scott Geng, Junfeng Yang, Xin Eric Wang, Carl Vondrick
VLM
14 Dec 2022

Adversarially Robust Video Perception by Seeing Motion
Lingyu Zhang, Chengzhi Mao, Junfeng Yang, Carl Vondrick
VGen, AAML
13 Dec 2022

Robust Perception through Equivariance
Chengzhi Mao, Lingyu Zhang, Abhishek Joshi, Junfeng Yang, Hongya Wang, Carl Vondrick
BDL, AAML
12 Dec 2022