
| Title | Venue | Year | Authors |
|---|---|---|---|
| Does Self-Rationalization Improve Robustness to Spurious Correlations? | Conference on Empirical Methods in Natural Language Processing (EMNLP) | 2022 | |
| Modeling Information Change in Science Communication with Semantically Matched Paraphrases | Conference on Empirical Methods in Natural Language Processing (EMNLP) | 2022 | |
| Retrieval Augmentation for Commonsense Reasoning: A Unified Approach | Conference on Empirical Methods in Natural Language Processing (EMNLP) | 2022 | |
| Lexical Generalization Improves with Larger Models and Longer Training | Conference on Empirical Methods in Natural Language Processing (EMNLP) | 2022 | |
| DiscoSense: Commonsense Reasoning with Discourse Connectives | Conference on Empirical Methods in Natural Language Processing (EMNLP) | 2022 | Prajjwal Bhargava, Vincent Ng |
| PATS: Sensitivity-aware Noisy Learning for Pretrained Language Models | Conference on Empirical Methods in Natural Language Processing (EMNLP) | 2022 | |
| ADDMU: Detection of Far-Boundary Adversarial Examples with Data and Model Uncertainty Estimation | Conference on Empirical Methods in Natural Language Processing (EMNLP) | 2022 | |
| Meta-learning Pathologies from Radiology Reports using Variance Aware Prototypical Networks | Conference on Empirical Methods in Natural Language Processing (EMNLP) | 2022 | |
| R2F: A General Retrieval, Reading and Fusion Framework for Document-level Natural Language Inference | Conference on Empirical Methods in Natural Language Processing (EMNLP) | 2022 | |
| What do Large Language Models Learn beyond Language? | Conference on Empirical Methods in Natural Language Processing (EMNLP) | 2022 | |
| Clip-Tuning: Towards Derivative-free Prompt Learning with a Mixture of Rewards | Conference on Empirical Methods in Natural Language Processing (EMNLP) | 2022 | |
| Spectral Probing | Conference on Empirical Methods in Natural Language Processing (EMNLP) | 2022 | |
| Metric-guided Distillation: Distilling Knowledge from the Metric to Ranker and Retriever for Generative Commonsense Reasoning | Conference on Empirical Methods in Natural Language Processing (EMNLP) | 2022 | |
| Efficiently Tuned Parameters are Task Embeddings | Conference on Empirical Methods in Natural Language Processing (EMNLP) | 2022 | |
| Finding Dataset Shortcuts with Grammar Induction | Conference on Empirical Methods in Natural Language Processing (EMNLP) | 2022 | |
| Disentangling Reasoning Capabilities from Language Models with Compositional Reasoning Transformers | Annual Meeting of the Association for Computational Linguistics (ACL) | 2022 | |
| A Linguistic Investigation of Machine Learning based Contradiction Detection Models: An Empirical Analysis and Future Perspectives | International Conference on Machine Learning and Applications (ICMLA) | 2022 | |
| SafeText: A Benchmark for Exploring Physical Safety in Language Models | Conference on Empirical Methods in Natural Language Processing (EMNLP) | 2022 | |
| Perceptual Grouping in Contrastive Vision-Language Models | IEEE International Conference on Computer Vision (ICCV) | 2022 | |
| Retrofitting Multilingual Sentence Embeddings with Abstract Meaning Representation | Conference on Empirical Methods in Natural Language Processing (EMNLP) | 2022 | |
| RARR: Researching and Revising What Language Models Say, Using Language Models | Annual Meeting of the Association for Computational Linguistics (ACL) | 2022 | |
| Zero-Shot Learners for Natural Language Understanding via a Unified Multiple Choice Perspective | Conference on Empirical Methods in Natural Language Processing (EMNLP) | 2022 | |
| Knowledge Prompting in Pre-trained Language Model for Natural Language Understanding | Conference on Empirical Methods in Natural Language Processing (EMNLP) | 2022 | |
| Sentence Representation Learning with Generative Objective rather than Contrastive Objective | Conference on Empirical Methods in Natural Language Processing (EMNLP) | 2022 | |
| Improving Semantic Matching through Dependency-Enhanced Pre-trained Model with Adaptive Fusion | Conference on Empirical Methods in Natural Language Processing (EMNLP) | 2022 | |
| TestAug: A Framework for Augmenting Capability-based NLP Tests | International Conference on Computational Linguistics (COLING) | 2022 | |
| Holistic Sentence Embeddings for Better Out-of-Distribution Detection | Conference on Empirical Methods in Natural Language Processing (EMNLP) | 2022 | |
| Transparency Helps Reveal When Language Models Learn Meaning | Transactions of the Association for Computational Linguistics (TACL) | 2022 | |
| Benchmarking Long-tail Generalization with Likelihood Splits | Findings of the Association for Computational Linguistics (Findings) | 2022 | Ameya Godbole, Robin Jia |
| OpenCQA: Open-ended Question Answering with Charts | Conference on Empirical Methods in Natural Language Processing (EMNLP) | 2022 | |
| Are Sample-Efficient NLP Models More Robust? | Annual Meeting of the Association for Computational Linguistics (ACL) | 2022 | |
| Task Compass: Scaling Multi-task Pre-training with Task Prefix | Conference on Empirical Methods in Natural Language Processing (EMNLP) | 2022 | |
| Measuring and Improving Semantic Diversity of Dialogue Generation | Conference on Empirical Methods in Natural Language Processing (EMNLP) | 2022 | |
| A Kernel-Based View of Language Model Fine-Tuning | International Conference on Machine Learning (ICML) | 2022 | |
| Model Cascading: Towards Jointly Improving Efficiency and Accuracy of NLP Systems | Conference on Empirical Methods in Natural Language Processing (EMNLP) | 2022 | |
| REV: Information-Theoretic Evaluation of Free-Text Rationales | Annual Meeting of the Association for Computational Linguistics (ACL) | 2022 | |
| CORE: A Retrieve-then-Edit Framework for Counterfactual Data Generation | Conference on Empirical Methods in Natural Language Processing (EMNLP) | 2022 | |
| Uncertainty Quantification with Pre-trained Language Models: A Large-Scale Empirical Analysis | Conference on Empirical Methods in Natural Language Processing (EMNLP) | 2022 | |
| Language Models Are Poor Learners of Directional Inference | Conference on Empirical Methods in Natural Language Processing (EMNLP) | 2022 | |