ResearchTrend.AI

CLIPVQA: Video Quality Assessment via CLIP

6 July 2024
Fengchuang Xing, Mingjie Li, Yuan-Gen Wang, Guopu Zhu, Xiaochun Cao
CLIP, ViT

Papers citing "CLIPVQA:Video Quality Assessment via CLIP"

12 / 12 papers shown
Adaptive Mixed-Scale Feature Fusion Network for Blind AI-Generated Image Quality Assessment
Tianwei Zhou, Songbai Tan, Wei Zhou, Yu Luo, Yuan-Gen Wang, Guanghui Yue
EGVM
23 Apr 2024
Blind Image Quality Assessment via Vision-Language Correspondence: A Multitask Learning Perspective
Weixia Zhang, Guangtao Zhai, Ying Wei, Xiaokang Yang, Kede Ma
VLM
27 Mar 2023
Expanding Language-Image Pretrained Models for General Video Recognition
Bolin Ni, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, Haibin Ling
VLM, CLIP, ViT
04 Aug 2022
Deep Neural Network for Blind Visual Quality Assessment of 4K Content
Wei Lu, Wei Sun, Xiongkuo Min, Wenhan Zhu, Quan Zhou, Junxia He, Qiyuan Wang, Zicheng Zhang, Tao Wang, Guangtao Zhai
09 Jun 2022
Prompting Visual-Language Models for Efficient Video Understanding
Chen Ju, Tengda Han, Kunhao Zheng, Ya Zhang, Weidi Xie
VPVLM, VLM
08 Dec 2021
CLIP-Adapter: Better Vision-Language Models with Feature Adapters
Peng Gao, Shijie Geng, Renrui Zhang, Teli Ma, Rongyao Fang, Yongfeng Zhang, Hongsheng Li, Yu Qiao
VLM, CLIP
09 Oct 2021
VideoCLIP: Contrastive Pre-training for Zero-shot Video-Text Understanding
Hu Xu, Gargi Ghosh, Po-Yao (Bernie) Huang, Dmytro Okhonko, Armen Aghajanyan, Florian Metze, Luke Zettlemoyer, Christoph Feichtenhofer
CLIP, VLM
28 Sep 2021
Learning to Prompt for Vision-Language Models
Kaiyang Zhou, Jingkang Yang, Chen Change Loy, Ziwei Liu
VPVLM, CLIP, VLM
02 Sep 2021
Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision
Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu H. Pham, Quoc V. Le, Yun-hsuan Sung, Zhen Li, Tom Duerig
VLM, CLIP
11 Feb 2021
Is Space-Time Attention All You Need for Video Understanding?
Gedas Bertasius, Heng Wang, Lorenzo Torresani
ViT
09 Feb 2021
RAPIQUE: Rapid and Accurate Video Quality Prediction of User Generated Content
Zhengzhong Tu, Xiangxu Yu, Yilin Wang, Neil Birkbeck, Balu Adsumilli, A. Bovik
26 Jan 2021
Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context
Zihang Dai, Zhilin Yang, Yiming Yang, J. Carbonell, Quoc V. Le, Ruslan Salakhutdinov
VLM
09 Jan 2019