ReINTEL Challenge 2020: A Comparative Study of Hybrid Deep Neural Network for Reliable Intelligence Identification on Vietnamese SNSs
arXiv: 2109.12777 · abs · PDF · HTML
27 September 2021
Hoang Viet Trinh, Tung Tien Bui, Tam Minh Nguyen, Huy Quang Dao, Quang Huu Pham, Ngoc N. Tran, Ta Minh Thanh

Papers citing "ReINTEL Challenge 2020: A Comparative Study of Hybrid Deep Neural Network for Reliable Intelligence Identification on Vietnamese SNSs" (13 papers shown)

From Universal Language Model to Downstream Task: Improving RoBERTa-Based Vietnamese Hate Speech Detection
Quang Huu Pham, Viet-Anh Nguyen, Linh Bao Doan, Ngoc N. Tran, Ta Minh Thanh
24 Feb 2021 · 11 citations

ReINTEL: A Multimodal Data Challenge for Responsible Information Identification on Social Network Sites
Duc-Trong Le, Xuan-Son Vu, Nhu-Dung To, Huu Nguyen, Thuy-Trinh Nguyen, ..., A. Nguyen, Minh-Duc Hoang, Nghia T. Le, Huyen Thi Minh Nguyen, Hoang D. Nguyen
16 Dec 2020 · 14 citations

Don't Stop Pretraining: Adapt Language Models to Domains and Tasks
Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, Noah A. Smith
Communities: VLM, AI4CE, CLL
23 Apr 2020 · 2,428 citations

PhoBERT: Pre-trained language models for Vietnamese
Dat Quoc Nguyen, A. Nguyen
02 Mar 2020 · 355 citations

RoBERTa: A Robustly Optimized BERT Pretraining Approach
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, M. Lewis, Luke Zettlemoyer, Veselin Stoyanov
Communities: AIMat
26 Jul 2019 · 24,464 citations

When Does Label Smoothing Help?
Rafael Müller, Simon Kornblith, Geoffrey E. Hinton
Communities: UQCV
06 Jun 2019 · 1,950 citations

A Survey of Fake News: Fundamental Theories, Detection Methods, and Opportunities
Xinyi Zhou, R. Zafarani
02 Dec 2018 · 281 citations

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova
Communities: VLM, SSL, SSeg
11 Oct 2018 · 94,891 citations

Deep contextualized word representations
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, Luke Zettlemoyer
Communities: NAI
15 Feb 2018 · 11,556 citations

Layer Normalization
Jimmy Lei Ba, J. Kiros, Geoffrey E. Hinton
21 Jul 2016 · 10,494 citations

Adam: A Method for Stochastic Optimization
Diederik P. Kingma, Jimmy Ba
Communities: ODL
22 Dec 2014 · 150,115 citations

Efficient Estimation of Word Representations in Vector Space
Tomas Mikolov, Kai Chen, G. Corrado, J. Dean
Communities: 3DV
16 Jan 2013 · 31,512 citations

Feature-Weighted Linear Stacking
Joseph Sill, G. Takács, Lester W. Mackey, David Lin
03 Nov 2009 · 248 citations