
LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention
Papers citing "LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention"
| Title | Authors |
|---|---|
| TG-LLaVA: Text Guided LLaVA via Learnable Latent Embeddings | Dawei Yan, Pengcheng Li, Yang Li, Hao Chen, Qingguo Chen, Weihua Luo, Wei Dong, Qingsen Yan, Haokui Zhang, Chunhua Shen |
| IW-Bench: Evaluating Large Multimodal Models for Converting Image-to-Web | Hongcheng Guo, Wei Zhang, Junhao Chen, Yaonan Gu, Jian Yang, ..., Binyuan Hui, Tianyu Liu, Jianxin Ma, Chang Zhou, Zhoujun Li |
| Hint-AD: Holistically Aligned Interpretability in End-to-End Autonomous Driving | Kairui Ding, Boyuan Chen, Yuchen Su, Huan-ang Gao, Bu Jin, ..., Wuqiang Zhang, Xiaohui Li, Paul Barsch, Hongyang Li, Hao Zhao |