Hazards from Increasingly Accessible Fine-Tuning of Downloadable Foundation Models

22 December 2023
Alan Chan, Ben Bucknall, Herbie Bradley, David M. Krueger

Papers citing "Hazards from Increasingly Accessible Fine-Tuning of Downloadable Foundation Models"

11 papers shown

Opening the Scope of Openness in AI
Tamara Paris, AJung Moon, Jin Guo
09 May 2025

Forecasting Open-Weight AI Model Growth on HuggingFace
Kushal Raj Bhandari, Pin-Yu Chen, Jianxi Gao
21 Feb 2025

Towards Data Governance of Frontier AI Models
Jason Hausenloy, Duncan McClements, Madhavendra Thakur
05 Dec 2024

Generative AI for Accessible and Inclusive Extended Reality
Jens Grubert, Junlong Chen, Per Ola Kristensson
31 Oct 2024

Risks and Opportunities of Open-Source Generative AI
Francisco Eiras, Aleksander Petrov, Bertie Vidgen, Christian Schroeder, Fabio Pizzati, ..., Matthew Jackson, Phillip H. S. Torr, Trevor Darrell, Y. Lee, Jakob N. Foerster
14 May 2024

Will releasing the weights of future large language models grant widespread access to pandemic agents?
Anjali Gopal, Nathan Helm-Burger, Lenni Justen, Emily H. Soice, Tiffany Tzeng, Geetha Jeyapragasan, Simon Grimm, Benjamin Mueller, K. Esvelt
25 Oct 2023

Emergent autonomous scientific research capabilities of large language models
Daniil A. Boiko, R. MacKnight, Gabe Gomes
Communities: ELM, LM&Ro, AI4CE, LLMAG
11 Apr 2023

P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, Jie Tang
Communities: VLM
14 Oct 2021

Raise a Child in Large Language Model: Towards Effective and Generalizable Fine-tuning
Runxin Xu, Fuli Luo, Zhiyuan Zhang, Chuanqi Tan, Baobao Chang, Songfang Huang, Fei Huang
Communities: LRM
13 Sep 2021

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
Communities: VPVLM
18 Apr 2021

ZeRO-Offload: Democratizing Billion-Scale Model Training
Jie Ren, Samyam Rajbhandari, Reza Yazdani Aminabadi, Olatunji Ruwase, Shuangyang Yang, Minjia Zhang, Dong Li, Yuxiong He
Communities: MoE
18 Jan 2021