ResearchTrend.AI

Unimodal Intermediate Training for Multimodal Meme Sentiment Classification
arXiv:2308.00528

1 August 2023
Muzhaffar Hazman, Susan McKeever, Josephine Griffith

Papers citing "Unimodal Intermediate Training for Multimodal Meme Sentiment Classification"

8 papers
Transferability in Deep Learning: A Survey
Junguang Jiang, Yang Shu, Jianmin Wang, Mingsheng Long
OOD
15 Jan 2022
MOMENTA: A Multimodal Framework for Detecting Harmful Memes and Their Targets
Shraman Pramanick, Shivam Sharma, Dimitar Dimitrov, Md. Shad Akhtar, Preslav Nakov, Tanmoy Chakraborty
11 Sep 2021
SemEval-2020 Task 8: Memotion Analysis -- The Visuo-Lingual Metaphor!
Chhavi Sharma, Deepesh Bhageria, W. Scott, Srinivas Pykl, A. Das, Tanmoy Chakraborty, Viswanath Pulabaigari, Björn Gambäck
09 Aug 2020
IITK at SemEval-2020 Task 8: Unimodal and Bimodal Sentiment Analysis of Internet Memes
Vishal Keswani, Sakshi Singh, Suryansh Agarwal, Ashutosh Modi
21 Jul 2020
The Hateful Memes Challenge: Detecting Hate Speech in Multimodal Memes
Douwe Kiela, Hamed Firooz, Aravind Mohan, Vedanuj Goswami, Amanpreet Singh, Pratik Ringshia, Davide Testuggine
10 May 2020
Intermediate-Task Transfer Learning with Pretrained Models for Natural Language Understanding: When and Why Does It Work?
Yada Pruksachatkun, Jason Phang, Haokun Liu, Phu Mon Htut, Xiaoyi Zhang, Richard Yuanzhe Pang, Clara Vania, Katharina Kann, Samuel R. Bowman
CLL, LRM
01 May 2020
BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions
Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, Kristina Toutanova
24 May 2019
WiC: the Word-in-Context Dataset for Evaluating Context-Sensitive Meaning Representations
Mohammad Taher Pilehvar, Jose Camacho-Collados
28 Aug 2018