ReqBrain: Task-Specific Instruction Tuning of LLMs for AI-Assisted Requirements Generation

23 May 2025
Mohammad Kasra Habib, Daniel Graziotin, Stefan Wagner

Papers citing "ReqBrain: Task-Specific Instruction Tuning of LLMs for AI-Assisted Requirements Generation"

28 / 28 papers shown

Mitigating Large Language Model Hallucination with Faithful Finetuning
Minda Hu, Bowei He, Yufei Wang, Liangyou Li, Chen Ma, Irwin King
HILM · 67 · 8 · 0 · 17 Jun 2024

Task Arithmetic with LoRA for Continual Learning
Rajas Chitale, Ankit Vaidya, Aditya Kane, Archana Ghotkar
58 · 16 · 0 · 04 Nov 2023

Building Real-World Meeting Summarization Systems using Large Language Models: A Practical Perspective
Md Tahmid Rahman Laskar, Xue-Yong Fu, Cheng-Hsiung Chen, TN ShashiBhushan
46 · 36 · 0 · 30 Oct 2023

Zephyr: Direct Distillation of LM Alignment
Lewis Tunstall, E. Beeching, Nathan Lambert, Nazneen Rajani, Kashif Rasul, ..., Nathan Habib, Nathan Sarrazin, Omar Sanseviero, Alexander M. Rush, Thomas Wolf
ALM · 64 · 382 · 0 · 25 Oct 2023

Mistral 7B
Albert Q. Jiang, Alexandre Sablayrolles, A. Mensch, Chris Bamford, Devendra Singh Chaplot, ..., Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed
MoE, LRM · 38 · 2,102 · 0 · 10 Oct 2023

Investigating the Catastrophic Forgetting in Multimodal Large Language Models
Yuexiang Zhai, Shengbang Tong, Xiao Li, Mu Cai, Qing Qu, Yong Jae Lee, Yi Ma
VLM, MLLM, CLL · 89 · 79 · 0 · 19 Sep 2023

Large Language Models for Software Engineering: A Systematic Literature Review
Xinying Hou, Yanjie Zhao, Yue Liu, Zhou Yang, Kailong Wang, Li Li, Xiapu Luo, David Lo, John C. Grundy, Haoyu Wang
52 · 355 · 0 · 21 Aug 2023

Continual Pre-Training of Large Language Models: How to (re)warm your model?
Kshitij Gupta, Benjamin Thérien, Adam Ibrahim, Mats L. Richter, Quentin G. Anthony, Eugene Belilovsky, Irina Rish, Timothée Lesort
KELM · 70 · 104 · 0 · 08 Aug 2023

Llama 2: Open Foundation and Fine-Tuned Chat Models
Hugo Touvron, Louis Martin, Kevin R. Stone, Peter Albert, Amjad Almahairi, ..., Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom
AI4MH, ALM · 206 · 11,636 · 0 · 18 Jul 2023

Maybe Only 0.5% Data is Needed: A Preliminary Exploration of Low Training Data Instruction Tuning
Haowen Chen, Yiming Zhang, Qi Zhang, Hantao Yang, Xiaomeng Hu, Xuetao Ma, Yifan YangGong, Jiaqi Zhao
ALM · 81 · 50 · 0 · 16 May 2023

Prompt-MIL: Boosting Multi-Instance Learning Schemes via Task-specific Prompt Tuning
Jingwei Zhang, S. Kapse, Ke Ma, Prateek Prasanna, Joel H. Saltz, Maria Vakalopoulou, Dimitris Samaras
VLM · 97 · 17 · 0 · 21 Mar 2023

Context-faithful Prompting for Large Language Models
Wenxuan Zhou, Sheng Zhang, Hoifung Poon, Muhao Chen
KELM · 34 · 61 · 0 · 20 Mar 2023

Two-stage LLM Fine-tuning with Less Specialization and More Generalization
Yihan Wang, Si Si, Daliang Li, Michal Lukasik, Felix X. Yu, Cho-Jui Hsieh, Inderjit S Dhillon, Sanjiv Kumar
94 · 29 · 0 · 01 Nov 2022

Efficient Few-Shot Learning Without Prompts
Lewis Tunstall, Nils Reimers, Unso Eun Seo Jo, Luke Bates, Daniel Korat, Moshe Wasserblat, Oren Pereg
VLM · 65 · 189 · 0 · 22 Sep 2022

Selection-Inference: Exploiting Large Language Models for Interpretable Logical Reasoning
Antonia Creswell, Murray Shanahan, I. Higgins
ReLM, LRM · 62 · 352 · 0 · 19 May 2022

Training Compute-Optimal Large Language Models
Jordan Hoffmann, Sebastian Borgeaud, A. Mensch, Elena Buchatskaya, Trevor Cai, ..., Karen Simonyan, Erich Elsen, Jack W. Rae, Oriol Vinyals, Laurent Sifre
AI4TS · 121 · 1,915 · 0 · 29 Mar 2022

Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution
Ananya Kumar, Aditi Raghunathan, Robbie Jones, Tengyu Ma, Percy Liang
OODD · 96 · 655 · 0 · 21 Feb 2022

FrugalScore: Learning Cheaper, Lighter and Faster Evaluation Metrics for Automatic Text Generation
Moussa Kamal Eddine, Guokan Shang, A. Tixier, Michalis Vazirgiannis
42 · 26 · 0 · 16 Oct 2021

PPT: Pre-trained Prompt Tuning for Few-shot Learning
Yuxian Gu, Xu Han, Zhiyuan Liu, Minlie Huang
VLM · 73 · 411 · 0 · 09 Sep 2021

Detecting Requirements Smells With Deep Learning: Experiences, Challenges and Future Work
Mohammad Kasra Habib, Stefan Wagner, Daniel Graziotin
UQCV · 16 · 10 · 0 · 06 Aug 2021

LoRA: Low-Rank Adaptation of Large Language Models
J. E. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen
OffRL, AI4TS, AI4CE, ALM, AIMat · 235 · 10,099 · 0 · 17 Jun 2021

Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning
Armen Aghajanyan, Luke Zettlemoyer, Sonal Gupta
75 · 549 · 1 · 22 Dec 2020

Language Models are Few-Shot Learners
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, ..., Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei
BDL · 500 · 41,106 · 0 · 28 May 2020

BERTScore: Evaluating Text Generation with BERT
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, Yoav Artzi
201 · 5,668 · 0 · 21 Apr 2019

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova
VLM, SSL, SSeg · 961 · 93,936 · 0 · 11 Oct 2018

ELICA: An Automated Tool for Dynamic Extraction of Requirements Relevant Information
Zahra Shakeri Hossein Abad, Vincenzo Gervasi, Didar Zowghi, K. Barker
13 · 21 · 0 · 21 Jul 2018

Why We Need New Evaluation Metrics for NLG
Jekaterina Novikova, Ondrej Dusek, Amanda Cercas Curry, Verena Rieser
67 · 456 · 0 · 21 Jul 2017

Overcoming catastrophic forgetting in neural networks
J. Kirkpatrick, Razvan Pascanu, Neil C. Rabinowitz, J. Veness, Guillaume Desjardins, ..., A. Grabska-Barwinska, Demis Hassabis, Claudia Clopath, D. Kumaran, R. Hadsell
CLL · 265 · 7,410 · 0 · 02 Dec 2016