ResearchTrend.AI
Project PIAF: Building a Native French Question-Answering Dataset
2 July 2020
Rachel Keraron, Guillaume Lancrenon, M. Bras, Frédéric Allary, Gilles Moyse, Thomas Scialom, Edmundo-Pavel Soriano-Morales, Jacopo Staiano

Papers citing "Project PIAF: Building a Native French Question-Answering Dataset" (7 / 7 papers shown)
  • The Lucie-7B LLM and the Lucie Training Dataset: Open resources for multilingual language generation
    Olivier Gouvert, Julie Hunter, Jérôme Louradour, Christophe Cerisara, Evan Dufraisse, Yaya Sy, Laura Rivière, Jean-Pierre Lorré, OpenLLM-France community
    15 Mar 2025
  • TIGQA: An Expert Annotated Question Answering Dataset in Tigrinya
    Hailay Teklehaymanot, Dren Fazlija, Niloy Ganguly, Gourab K. Patro, Wolfgang Nejdl
    26 Apr 2024
  • FQuAD2.0: French Question Answering and knowing that you know nothing
    Quentin Heinrich, Gautier Viaud, Wacim Belblidia
    27 Sep 2021
  • Synthetic Data Augmentation for Zero-Shot Cross-Lingual Question Answering
    Arij Riabi, Thomas Scialom, Rachel Keraron, Benoît Sagot, Djamé Seddah, Jacopo Staiano
    23 Oct 2020
  • FQuAD: French Question Answering Dataset
    Martin d'Hoffschmidt, Wacim Belblidia, Tom Brendlé, Quentin Heinrich, Maxime Vidal
    14 Feb 2020
  • MLQA: Evaluating Cross-lingual Extractive Question Answering
    Patrick Lewis, Barlas Oğuz, Ruty Rinott, Sebastian Riedel, Holger Schwenk
    16 Oct 2019
  • Are We Modeling the Task or the Annotator? An Investigation of Annotator Bias in Natural Language Understanding Datasets
    Mor Geva, Yoav Goldberg, Jonathan Berant
    21 Aug 2019