IT5: Text-to-text Pretraining for Italian Language Understanding and Generation
Gabriele Sarti, Malvina Nissim
7 March 2022 · arXiv:2203.03759 · [AILaw]

Papers citing "IT5: Text-to-text Pretraining for Italian Language Understanding and Generation" (21 papers)
- Linguistic Knowledge Can Enhance Encoder-Decoder Models (If You Let It). Alessio Miaschi, F. Dell’Orletta, Giulia Venturi. 27 Feb 2024.
- Annotation and Classification of Relevant Clauses in Terms-and-Conditions Contracts. Pietro Giovanni Bizzaro, Elena Della Valentina, Maurizio Napolitano, Nadia Mana, Massimo Zancanaro. 22 Feb 2024. [AILaw]
- Typhoon: Thai Large Language Models. Kunat Pipatanakul, Phatrasek Jirabovonvisut, Potsawee Manakul, Sittipong Sripaisarnmongkol, Ruangsak Patomwong, Pathomporn Chokchainant, Kasima Tharnpipitchai. 21 Dec 2023.
- LLaMAntino: LLaMA 2 Models for Effective Text Generation in Italian Language. Pierpaolo Basile, Elio Musacchio, Marco Polignano, Lucia Siciliani, G. Fiameni, Giovanni Semeraro. 15 Dec 2023.
- Cerbero-7B: A Leap Forward in Language-Specific LLMs Through Enhanced Chat Corpus Generation and Evaluation. Federico A. Galatolo, M. G. Cimino. 27 Nov 2023.
- Sequence-to-Sequence Spanish Pre-trained Language Models. Vladimir Araujo, Maria Mihaela Truşcǎ, Rodrigo Tufino, Marie-Francine Moens. 20 Sep 2023.
- Legal Summarisation through LLMs: The PRODIGIT Project. Thiago Raulino Dal Pont, F. Galli, Andrea Loreggia, Giuseppe Pisano, R. Rovatti, Giovanni Sartor. 04 Aug 2023. [AILaw]
- Camoscio: an Italian Instruction-tuned LLaMA. Andrea Santilli, Emanuele Rodolà. 31 Jul 2023.
- Fauno: The Italian Large Language Model that will leave you senza parole! Andrea Bacciu, Giovanni Trappolini, Andrea Santilli, Emanuele Rodolà, Fabrizio Silvestri. 26 Jun 2023.
- Response Generation in Longitudinal Dialogues: Which Knowledge Representation Helps? Seyed Mahed Mousavi, Simone Caldarella, Giuseppe Riccardi. 25 May 2023.
- HeRo: RoBERTa and Longformer Hebrew Language Models. Vitaly Shalumov, Harel Haskey. 18 Apr 2023. [VLM]
- Sequence to sequence pretraining for a less-resourced Slovenian language. Matej Ulčar, Marko Robnik-Šikonja. 28 Jul 2022. [AIMat]
- esCorpius: A Massive Spanish Crawling Corpus. Asier Gutiérrez-Fandiño, David Pérez-Fernández, Jordi Armengol-Estapé, D. Griol, Z. Callejas. 30 Jun 2022.
- ViT5: Pretrained Text-to-Text Transformer for Vietnamese Language Generation. Long Phan, H. Tran, Hieu Duy Nguyen, Trieu H. Trinh. 13 May 2022. [ViT]
- Multitask Prompted Training Enables Zero-Shot Task Generalization. Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, ..., T. Bers, Stella Biderman, Leo Gao, Thomas Wolf, Alexander M. Rush. 15 Oct 2021. [LRM]
- Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers. Yi Tay, Mostafa Dehghani, J. Rao, W. Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler. 22 Sep 2021.
- AraT5: Text-to-Text Transformers for Arabic Language Generation. El Moatez Billah Nagoudi, AbdelRahim Elmadany, Muhammad Abdul-Mageed. 31 Aug 2021.
- On the interaction of automatic evaluation and task framing in headline style transfer. Lorenzo De Mattei, Michele Cafagna, Huiyuan Lai, F. Dell’Orletta, Malvina Nissim, Albert Gatt. 05 Jan 2021.
- How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models. Phillip Rust, Jonas Pfeiffer, Ivan Vulić, Sebastian Ruder, Iryna Gurevych. 31 Dec 2020.
- Synthetic Data Augmentation for Zero-Shot Cross-Lingual Question Answering. Arij Riabi, Thomas Scialom, Rachel Keraron, Benoît Sagot, Djamé Seddah, Jacopo Staiano. 23 Oct 2020.
- What the [MASK]? Making Sense of Language-Specific BERT Models. Debora Nozza, Federico Bianchi, Dirk Hovy. 05 Mar 2020.