Large Language Models are Interpretable Learners


25 June 2024
Ruochen Wang
Si Si
Felix X. Yu
Dorothea Wiesmann
Cho-Jui Hsieh
Inderjit Dhillon

Papers citing "Large Language Models are Interpretable Learners"

4 papers shown
Discovering Chunks in Neural Embeddings for Interpretability
Shuchen Wu, Stephan Alaniz, Eric Schulz, Zeynep Akata
03 Feb 2025 (47 / 0 / 0)

Post-hoc Concept Bottleneck Models
Mert Yuksekgonul, Maggie Wang, James Zou
31 May 2022 (145 / 185 / 0)

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
OSLM, ALM
04 Mar 2022 (366 / 12,003 / 0)

Learning Differentiable Programs with Admissible Neural Heuristics
Ameesh Shah, Eric Zhan, Jennifer J. Sun, Abhinav Verma, Yisong Yue, Swarat Chaudhuri
23 Jul 2020 (149 / 43 / 0)