Low-Rank HOCA: Efficient High-Order Cross-Modal Attention for Video Captioning

1 November 2019
Tao Jin, Siyu Huang, Yingming Li, Zhongfei Zhang
arXiv: 1911.00212

Papers citing "Low-Rank HOCA: Efficient High-Order Cross-Modal Attention for Video Captioning"

3 / 3 papers shown
BM-NAS: Bilevel Multimodal Neural Architecture Search
Yihang Yin, Siyu Huang, Xiang Zhang
19 Apr 2021 (84 / 27 / 0)

Learning Modality Interaction for Temporal Sentence Localization and Event Captioning in Videos
Shaoxiang Chen, Wenhao Jiang, Wei Liu, Yu-Gang Jiang
28 Jul 2020 (99 / 102 / 0)

SBAT: Video Captioning with Sparse Boundary-Aware Transformer
Tao Jin, Siyu Huang, Ming Chen, Yingming Li, Zhongfei Zhang
23 Jul 2020 (98 / 56 / 0)