Zeroth-Order Fine-Tuning of LLMs in Random Subspaces

Ziming Yu, Pan Zhou, Sike Wang, Jia Li, Hua Huang
11 October 2024 · arXiv:2410.08989

Papers citing "Zeroth-Order Fine-Tuning of LLMs in Random Subspaces"

Stochastic Subspace Descent Accelerated via Bi-fidelity Line Search
Nuojin Cheng, Alireza Doostan, Stephen Becker
30 April 2025

MaZO: Masked Zeroth-Order Optimization for Multi-Task Fine-Tuning of Large Language Models
Zhen Zhang, Yi Yang, Kai Zhen, Nathan Susanj, Athanasios Mouchtaris, Siegfried Kunzmann, Zheng Zhang
17 February 2025