Unveiling LLM Mechanisms Through Neural ODEs and Control Theory
Yukun Zhang, Qi Dong
23 June 2024 · arXiv:2406.16985

Papers citing "Unveiling LLM Mechanisms Through Neural ODEs and Control Theory"

22 papers shown

Learning Adaptive Hydrodynamic Models Using Neural ODEs in Complex Conditions
Cong Wang, Aoming Liang, Fei Han, Xinyu Zeng, Zhibin Li, Dixia Fan, Jens Kober
01 Oct 2024 · AI4CE

Capabilities of Large Language Models in Control Engineering: A Benchmark Study on GPT-4, Claude 3 Opus, and Gemini 1.0 Ultra
Darioush Kevian, U. Syed, Xing-ming Guo, Aaron J. Havens, Geir Dullerud, Peter M. Seiler, Lianhui Qin, Bin Hu
04 Apr 2024 · ELM

SelfIE: Self-Interpretation of Large Language Model Embeddings
Haozhe Chen, Carl Vondrick, Chengzhi Mao
16 Mar 2024

Unification of Symmetries Inside Neural Networks: Transformer, Feedforward and Neural ODE
Koji Hashimoto, Yuji Hirono, Akiyoshi Sannai
04 Feb 2024 · AI4CE

Rethinking Interpretability in the Era of Large Language Models
Chandan Singh, J. Inala, Michel Galley, Rich Caruana, Jianfeng Gao
30 Jan 2024 · LRM, AI4CE

Function Vectors in Large Language Models
Eric Todd, Millicent Li, Arnab Sen Sharma, Aaron Mueller, Byron C. Wallace, David Bau
23 Oct 2023

Can Large Language Models Explain Themselves? A Study of LLM-Generated Self-Explanations
Shiyuan Huang, Siddarth Mamidanna, Shreedhar Jangam, Yilun Zhou, Leilani H. Gilpin
17 Oct 2023 · LRM, MILM, ELM

DriveGPT4: Interpretable End-to-end Autonomous Driving via Large Language Model
Zhenhua Xu, Yujia Zhang, Enze Xie, Zhen Zhao, Yong Guo, Kwan-Yee K. Wong, Zhenguo Li, Hengshuang Zhao
02 Oct 2023 · MLLM

Reasoning on Graphs: Faithful and Interpretable Large Language Model Reasoning
Linhao Luo, Yuan-Fang Li, Gholamreza Haffari, Shirui Pan
02 Oct 2023 · RALM, LRM

Sequential Integrated Gradients: a simple but effective method for explaining language models
Joseph Enguehard
25 May 2023

Language in a Bottle: Language Model Guided Concept Bottlenecks for Interpretable Image Classification
Yue Yang, Artemis Panagopoulou, Shenghao Zhou, Daniel Jin, Chris Callison-Burch, Mark Yatskar
21 Nov 2022

Interpretability in the Wild: a Circuit for Indirect Object Identification in GPT-2 small
Kevin Wang, Alexandre Variengien, Arthur Conmy, Buck Shlegeris, Jacob Steinhardt
01 Nov 2022

Selection-Inference: Exploiting Large Language Models for Interpretable Logical Reasoning
Antonia Creswell, Murray Shanahan, I. Higgins
19 May 2022 · ReLM, LRM

Locating and Editing Factual Associations in GPT
Kevin Meng, David Bau, A. Andonian, Yonatan Belinkov
10 Feb 2022 · KELM

Language Models are Few-Shot Learners
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, ..., Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei
28 May 2020 · BDL

Longformer: The Long-Document Transformer
Iz Beltagy, Matthew E. Peters, Arman Cohan
10 Apr 2020 · RALM, VLM

Latent ODEs for Irregularly-Sampled Time Series
Yulia Rubanova, Ricky T. Q. Chen, David Duvenaud
08 Jul 2019 · BDL, AI4TS

What do you learn from context? Probing for sentence structure in contextualized word representations
Ian Tenney, Patrick Xia, Berlin Chen, Alex Jinpeng Wang, Adam Poliak, ..., Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipanjan Das, Ellie Pavlick
15 May 2019

Neural Ordinary Differential Equations
T. Chen, Yulia Rubanova, J. Bettencourt, David Duvenaud
19 Jun 2018 · AI4CE

A Unified Approach to Interpreting Model Predictions
Scott M. Lundberg, Su-In Lee
22 May 2017 · FAtt

Axiomatic Attribution for Deep Networks
Mukund Sundararajan, Ankur Taly, Qiqi Yan
04 Mar 2017 · OOD, FAtt

The Mythos of Model Interpretability
Zachary Chase Lipton
10 Jun 2016 · FaML