Towards Making Flowchart Images Machine Interpretable

29 January 2025
Shivalika Singh, Prajwal Gatti, Yogesh Kumar, Vikash Yadav, Anand Mishra
arXiv: 2501.17441

Papers citing "Towards Making Flowchart Images Machine Interpretable"

21 papers shown

Follow the Flow: Fine-grained Flowchart Attribution with Neurosymbolic Agents
Manan Suri, Puneet Mathur, Nedim Lipka, Franck Dernoncourt, Ryan Rossi, Vivek Gupta, Dinesh Manocha
02 Jun 2025

Arrow-Guided VLM: Enhancing Flowchart Understanding via Arrow Direction Encoding
Takamitsu Omasa, Ryo Koshihara, Masumi Morishige
09 May 2025

StarFlow: Generating Structured Workflow Outputs From Sketch Images
Patrice Bechard, Chao Wang, Amirhossein Abaskohi, Juan A. Rodriguez, Christopher Pal, David Vazquez, Spandana Gella, Sai Rajeswar, Perouz Taslakian
27 Mar 2025

Automated Assessment of Multimodal Answer Sheets in the STEM domain
Rajlaxmi Patil, Aditya Kulkarni, Ruturaj Ghatage, Sharvi Endait, Geetanjali Kale, Raviraj Joshi
24 Sep 2024

FlowVQA: Mapping Multimodal Logic in Visual Question Answering with Flowcharts
Shubhankar Singh, Purvi Chaurasia, Yerram Varun, Pranshu Pandya, Vatsal Gupta, Vivek Gupta, Dan Roth
27 Jun 2024

TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models
Minghao Li, Tengchao Lv, Jingye Chen, Lei Cui, Yijuan Lu, D. Florêncio, Cha Zhang, Zhoujun Li, Furu Wei
21 Sep 2021

CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation
Yue Wang, Weishi Wang, Shafiq Joty, Guosheng Lin
02 Sep 2021

Program Synthesis with Large Language Models
Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, ..., Ellen Jiang, Carrie J. Cai, Michael Terry, Quoc V. Le, Charles Sutton
16 Aug 2021

Unified Pre-training for Program Understanding and Generation
Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang
10 Mar 2021

CodeBLEU: a Method for Automatic Evaluation of Code Synthesis
Shuo Ren, Daya Guo, Shuai Lu, Long Zhou, Shujie Liu, Duyu Tang, Neel Sundaresan, M. Zhou, Ambrosio Blanco, Shuai Ma
22 Sep 2020

GraphCodeBERT: Pre-training Code Representations with Data Flow
Daya Guo, Shuo Ren, Shuai Lu, Zhangyin Feng, Duyu Tang, ..., Dawn Drain, Neel Sundaresan, Jian Yin, Daxin Jiang, M. Zhou
17 Sep 2020

Language Models are Few-Shot Learners
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, ..., Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei
28 May 2020

CodeBERT: A Pre-Trained Model for Programming and Natural Languages
Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, ..., Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, Ming Zhou
19 Feb 2020

BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
M. Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdel-rahman Mohamed, Omer Levy, Veselin Stoyanov, Luke Zettlemoyer
29 Oct 2019

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Colin Raffel, Noam M. Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu
23 Oct 2019

RoBERTa: A Robustly Optimized BERT Pretraining Approach
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, M. Lewis, Luke Zettlemoyer, Veselin Stoyanov
26 Jul 2019

MASS: Masked Sequence to Sequence Pre-training for Language Generation
Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu
07 May 2019

Character Region Awareness for Text Detection
Youngmin Baek, Bado Lee, Dongyoon Han, Sangdoo Yun, Hwalsuk Lee
03 Apr 2019

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova
11 Oct 2018

Attention Is All You Need
Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Lukasz Kaiser, Illia Polosukhin
12 Jun 2017

Adam: A Method for Stochastic Optimization
Diederik P. Kingma, Jimmy Ba
22 Dec 2014