ResearchTrend.AI
Layer by Layer: Uncovering Hidden Representations in Language Models

4 February 2025
Oscar Skean, Md Rifat Arefin, Dan Zhao, Niket Patel, Jalal Naghiyev, Yann LeCun, Ravid Shwartz-Ziv
Communities: MILM, AIFin

Papers citing "Layer by Layer: Uncovering Hidden Representations in Language Models"

4 / 4 papers shown.

- FLAME-MoE: A Transparent End-to-End Research Platform for Mixture-of-Experts Language Models (MoE). Hao Kang, Zichun Yu, Chenyan Xiong. 26 May 2025.
- TRACE for Tracking the Emergence of Semantic Representations in Transformers. Nura Aljaafari, Danilo S. Carvalho, André Freitas. 23 May 2025.
- Learning Interpretable Representations Leads to Semantically Faithful EEG-to-Text Generation. Xiaozhao Liu, Dinggang Shen, Xihui Liu. 21 May 2025.
- Do Language Models Use Their Depth Efficiently? Róbert Csordás, Christopher D. Manning, Christopher Potts. 20 May 2025.