Interpreting Embedding Spaces by Conceptualization
Adi Simhi, Shaul Markovitch · 22 August 2022 · arXiv:2209.00445

Papers citing "Interpreting Embedding Spaces by Conceptualization"

29 / 29 papers shown
Title
SWEA: Updating Factual Knowledge in Large Language Models via Subject Word Embedding Altering
Xiaopeng Li, Huijun Liu, Shangwen Wang, Bin Ji, ..., Jun Ma, Jie Yu, Xiaodong Liu, Jing Wang, Weimin Zhang · KELM · 5 citations · 31 Jan 2024

Sentence-T5: Scalable Sentence Encoders from Pre-trained Text-to-Text Models
Jianmo Ni, Gustavo Hernández Ábrego, Noah Constant, Ji Ma, Keith B. Hall, Daniel Cer, Yinfei Yang · 557 citations · 19 Aug 2021

Post-hoc Interpretability for Neural NLP: A Survey
Andreas Madsen, Siva Reddy, A. Chandar · XAI · 231 citations · 10 Aug 2021

Transformer visualization via dictionary learning: contextualized embedding as a linear superposition of transformer factors
Zeyu Yun, Yubei Chen, Bruno A. Olshausen, Yann LeCun · 77 citations · 29 Mar 2021

Polyjuice: Generating Counterfactuals for Explaining, Evaluating, and Improving Models
Tongshuang Wu, Marco Tulio Ribeiro, Jeffrey Heer, Daniel S. Weld · 249 citations · 01 Jan 2021

Explaining NLP Models via Minimal Contrastive Editing (MiCE)
Alexis Ross, Ana Marasović, Matthew E. Peters · 122 citations · 27 Dec 2020

A Survey on the Explainability of Supervised Machine Learning
Nadia Burkart, Marco F. Huber · FaML, XAI · 773 citations · 16 Nov 2020

Explaining and Improving Model Behavior with k Nearest Neighbor Representations
Nazneen Rajani, Ben Krause, Wenpeng Yin, Tong Niu, R. Socher, Caiming Xiong · FAtt · 33 citations · 18 Oct 2020

The POLAR Framework: Polar Opposites Enable Interpretability of Pre-Trained Word Embeddings
Binny Mathew, Sandipan Sikdar, Florian Lemmerich, M. Strohmaier · 36 citations · 27 Jan 2020

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Colin Raffel, Noam M. Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu · AIMat · 20,127 citations · 23 Oct 2019

On Completeness-aware Concept-Based Explanations in Deep Neural Networks
Chih-Kuan Yeh, Been Kim, Sercan O. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar · FAtt · 305 citations · 17 Oct 2019

Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks
Nils Reimers, Iryna Gurevych · 12,193 citations · 27 Aug 2019

RoBERTa: A Robustly Optimized BERT Pretraining Approach
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, M. Lewis, Luke Zettlemoyer, Veselin Stoyanov · AIMat · 24,431 citations · 26 Jul 2019

What Does BERT Look At? An Analysis of BERT's Attention
Kevin Clark, Urvashi Khandelwal, Omer Levy, Christopher D. Manning · MILM · 1,594 citations · 11 Jun 2019

Parallax: Visualizing and Understanding the Semantics of Embedding Spaces via Algebraic Formulae
Piero Molino, Yang Wang, Jiawei Zhang · 9 citations · 28 May 2019

BERT Rediscovers the Classical NLP Pipeline
Ian Tenney, Dipanjan Das, Ellie Pavlick · MILM, SSeg · 1,471 citations · 15 May 2019

Analytical Methods for Interpretable Ultradense Word Embeddings
Philipp Dufter, Hinrich Schütze · 25 citations · 18 Apr 2019

What Is One Grain of Sand in the Desert? Analyzing Individual Neurons in Deep NLP Models
Fahim Dalvi, Nadir Durrani, Hassan Sajjad, Yonatan Belinkov, A. Bau, James R. Glass · MILM · 190 citations · 21 Dec 2018

Identifying and Controlling Important Neurons in Neural Machine Translation
A. Bau, Yonatan Belinkov, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, James R. Glass · MILM · 184 citations · 03 Nov 2018

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova · VLM, SSL, SSeg · 94,770 citations · 11 Oct 2018

Firearms and Tigers are Dangerous, Kitchen Knives and Zebras are Not: Testing whether Word Embeddings Can Tell
Pia Sommerauer, Antske Fokkens · 29 citations · 05 Sep 2018

Imparting Interpretability to Word Embeddings while Preserving Semantic Structure
Lutfi Kerem Senel, Ihsan Utlu, Furkan Şahinuç, H. Ozaktas, Aykut Koç · 14 citations · 19 Jul 2018

SPINE: SParse Interpretable Neural Embeddings
Anant Subramanian, Danish Pruthi, Harsh Jhamtani, Taylor Berg-Kirkpatrick, Eduard H. Hovy · 131 citations · 23 Nov 2017

Semantic Structure and Interpretability of Word Embeddings
Lutfi Kerem Senel, Ihsan Utlu, Veysel Yücesoy, Aykut Koç, Tolga Çukur · 105 citations · 01 Nov 2017

A Unified Approach to Interpreting Model Predictions
Scott M. Lundberg, Su-In Lee · FAtt · 21,906 citations · 22 May 2017

Model-Agnostic Interpretability of Machine Learning
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin · FAtt, FaML · 838 citations · 16 Jun 2016

"Why Should I Trust You?": Explaining the Predictions of Any Classifier
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
FAtt
FaML
1.2K
16,976
0
16 Feb 2016
Linear Algebraic Structure of Word Senses, with Applications to Polysemy
Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, Andrej Risteski · 282 citations · 14 Jan 2016

Character-level Convolutional Networks for Text Classification
Xiang Zhang, Junbo Zhao, Yann LeCun · 6,107 citations · 04 Sep 2015