ResearchTrend.AI
Papers › 2504.08778 › Cited By
From Tokens to Lattices: Emergent Lattice Structures in Language Models

4 April 2025
Bo Xiong, Steffen Staab
Papers citing "From Tokens to Lattices: Emergent Lattice Structures in Language Models" (18 of 18 papers shown)

Identifying Linear Relational Concepts in Large Language Models
David Chanin, Anthony Hunter, Oana-Maria Camburu · 15 Nov 2023

The Linear Representation Hypothesis and the Geometry of Large Language Models
Kiho Park, Yo Joong Choe, Victor Veitch · 07 Nov 2023

Do PLMs Know and Understand Ontological Knowledge?
Weiqi Wu, Chengyue Jiang, Yong Jiang, Pengjun Xie, Kewei Tu · 12 Sep 2023

Scaling up Discovery of Latent Concepts in Deep NLP Models
Majd Hawasly, Fahim Dalvi, Nadir Durrani · 20 Aug 2023

A Comprehensive Capability Analysis of GPT-3 and GPT-3.5 Series Models
Junjie Ye, Xuanting Chen, Nuo Xu, Can Zu, Zekai Shao, ..., Jie Zhou, Siming Chen, Tao Gui, Qi Zhang, Xuanjing Huang · 18 Mar 2023

COPEN: Probing Conceptual Knowledge in Pre-trained Language Models
Hao Peng, Xiaozhi Wang, Shengding Hu, Hailong Jin, Lei Hou, Juanzi Li, Zhiyuan Liu, Qun Liu · 08 Nov 2022

Toy Models of Superposition
Nelson Elhage, Tristan Hume, Catherine Olsson, Nicholas Schiefer, T. Henighan, ..., Sam McCandlish, Jared Kaplan, Dario Amodei, Martin Wattenberg, C. Olah · 21 Sep 2022

Analyzing Encoded Concepts in Transformer Language Models
Hassan Sajjad, Nadir Durrani, Fahim Dalvi, Firoj Alam, A. Khan, Jia Xu · 27 Jun 2022

Discovering Latent Concepts Learned in BERT
Fahim Dalvi, A. Khan, Firoj Alam, Nadir Durrani, Jia Xu, Hassan Sajjad · 15 May 2022

A Review on Language Models as Knowledge Bases
Badr AlKhamissi, Millicent Li, Asli Celikyilmaz, Mona T. Diab, Marjan Ghazvininejad · 12 Apr 2022

Inspecting the concept knowledge graph encoded by modern language models
Carlos Aspillaga, Marcelo Mendoza, Alvaro Soto · 27 May 2021

On the Inductive Bias of Masked Language Modeling: From Statistical to Syntactic Dependencies
Tianyi Zhang, Tatsunori Hashimoto · 12 Apr 2021

Asking without Telling: Exploring Latent Ontologies in Contextual Representations
Julian Michael, Jan A. Botha, Ian Tenney · 29 Apr 2020

How Does BERT Answer Questions? A Layer-Wise Analysis of Transformer Representations
Betty van Aken, B. Winter, Alexander Löser, Felix Alexander Gers · 11 Sep 2019

Language Models as Knowledge Bases?
Fabio Petroni, Tim Rocktäschel, Patrick Lewis, A. Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel · 03 Sep 2019

BERT has a Mouth, and It Must Speak: BERT as a Markov Random Field Language Model
Alex Jinpeng Wang, Kyunghyun Cho · 11 Feb 2019

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova · 11 Oct 2018

Learning Concept Hierarchies from Text Corpora using Formal Concept Analysis
Philipp Cimiano, Andreas Hotho, Steffen Staab · 09 Sep 2011