From Tokens to Lattices: Emergent Lattice Structures in Language Models. International Conference on Learning Representations (ICLR), 2025.
The Representation Landscape of Few-shot Learning and Fine-tuning in Large Language Models. Neural Information Processing Systems (NeurIPS), 2024.
Exploring Alignment in Shared Cross-lingual Spaces. Annual Meeting of the Association for Computational Linguistics (ACL), 2024.
Scaling up Discovery of Latent Concepts in Deep NLP Models. Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2023.
Can LLMs Facilitate Interpretation of Pre-trained Language Models? Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023.
Probing Graph Representations. International Conference on Artificial Intelligence and Statistics (AISTATS), 2023.
NxPlain: A Web-based Tool for Discovery of Latent Concepts. Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2023.
COPEN: Probing Conceptual Knowledge in Pre-trained Language Models. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022.
On the Transformation of Latent Space in Fine-Tuned NLP Models. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022.
Analyzing Encoded Concepts in Transformer Language Models. North American Chapter of the Association for Computational Linguistics (NAACL), 2022.
Discovering Latent Concepts Learned in BERT. International Conference on Learning Representations (ICLR), 2022.
Unsupervised Slot Schema Induction for Task-oriented Dialog. North American Chapter of the Association for Computational Linguistics (NAACL), 2022.
Towards Understanding Large-Scale Discourse Structures in Pre-Trained and Fine-Tuned Language Models. North American Chapter of the Association for Computational Linguistics (NAACL), 2022.
On the Importance of Data Size in Probing Fine-tuned Models. Findings of the Association for Computational Linguistics (Findings of ACL), 2022.
Putting Words in BERT's Mouth: Navigating Contextualized Vector Spaces with Pseudowords. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021.
How Does Fine-tuning Affect the Geometry of Embedding Space: A Case Study on Isotropy. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021.
Neuron-level Interpretation of Deep NLP Models: A Survey. Transactions of the Association for Computational Linguistics (TACL), 2021.
A Cluster-based Approach for Improving Isotropy in Contextual Embedding Space. Annual Meeting of the Association for Computational Linguistics (ACL), 2021.
Fine-grained Interpretation and Causation Analysis in Deep NLP Models. North American Chapter of the Association for Computational Linguistics (NAACL), 2021.
DirectProbe: Studying Representations without Classifiers. North American Chapter of the Association for Computational Linguistics (NAACL), 2021.
Discourse Probing of Pretrained Language Models. North American Chapter of the Association for Computational Linguistics (NAACL), 2021.
The Rediscovery Hypothesis: Language Models Need to Meet Linguistics. Journal of Artificial Intelligence Research (JAIR), 2021.
RuSentEval: Linguistic Source, Encoder Force! Workshop on Balto-Slavic Natural Language Processing (BSNLP), 2021.
Probing Classifiers: Promises, Shortcomings, and Advances. Computational Linguistics (CL), 2021.
How Far Does BERT Look At: Distance-based Clustering and Analysis of BERT's Attention. International Conference on Computational Linguistics (COLING), 2020.