ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Patch-Based Stochastic Attention for Image Editing
arXiv:2202.03163 (v4, latest) · 7 February 2022
Nicolas Cherel, Andrés Almansa, Y. Gousseau, A. Newson
ArXiv (abs) · PDF · HTML · GitHub (7★)

Papers citing "Patch-Based Stochastic Attention for Image Editing"

25 papers shown:
Swin Transformer: Hierarchical Vision Transformer using Shifted Windows
Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, B. Guo
ViT · 455 / 21,439 / 0 · 25 Mar 2021
Rethinking Attention with Performers
K. Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, ..., Afroz Mohiuddin, Lukasz Kaiser, David Belanger, Lucy J. Colwell, Adrian Weller
184 / 1,597 / 0 · 30 Sep 2020
Efficient Transformers: A Survey
Yi Tay, Mostafa Dehghani, Dara Bahri, Donald Metzler
VLM · 156 / 1,124 / 0 · 14 Sep 2020
Big Bird: Transformers for Longer Sequences
Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, ..., Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed
VLM · 546 / 2,098 / 0 · 28 Jul 2020
Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention
Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, François Fleuret
201 / 1,771 / 0 · 29 Jun 2020
Denoising Diffusion Probabilistic Models
Jonathan Ho, Ajay Jain, Pieter Abbeel
DiffM · 650 / 18,276 / 0 · 19 Jun 2020
Linformer: Self-Attention with Linear Complexity
Sinong Wang, Belinda Z. Li, Madian Khabsa, Han Fang, Hao Ma
216 / 1,713 / 0 · 08 Jun 2020
Image Super-Resolution with Cross-Scale Non-Local Attention and Exhaustive Self-Exemplars Mining
Yiqun Mei, Yuchen Fan, Yuqian Zhou, Lichao Huang, Thomas S. Huang, Humphrey Shi
SupR · 82 / 250 / 0 · 02 Jun 2020
End-to-End Object Detection with Transformers
Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko
ViT, 3DV, PINN · 421 / 13,048 / 0 · 26 May 2020
Reformer: The Efficient Transformer
Nikita Kitaev, Lukasz Kaiser, Anselm Levskaya
VLM · 197 / 2,327 / 0 · 13 Jan 2020
Stand-Alone Self-Attention in Vision Models
Prajit Ramachandran, Niki Parmar, Ashish Vaswani, Irwan Bello, Anselm Levskaya, Jonathon Shlens
VLM, SLR, ViT · 98 / 1,215 / 0 · 13 Jun 2019
SCRAM: Spatially Coherent Randomized Attention Maps
D. A. Calian, P. Roelants, Jacques Calì, B. Carr, K. Dubba, John E. Reid, Dell Zhang
31 / 2 / 0 · 24 May 2019
Generating Long Sequences with Sparse Transformers
R. Child, Scott Gray, Alec Radford, Ilya Sutskever
129 / 1,908 / 0 · 23 Apr 2019
Neural Nearest Neighbors Networks
Tobias Plötz, Stefan Roth
70 / 340 / 0 · 30 Oct 2018
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova
VLM, SSL, SSeg · 1.8K / 95,114 / 0 · 11 Oct 2018
Image Super-Resolution Using Very Deep Residual Channel Attention Networks
Yulun Zhang, Kunpeng Li, Kai Li, Lichen Wang, Bineng Zhong, Y. Fu
SupR · 103 / 4,330 / 0 · 08 Jul 2018
Non-Local Recurrent Network for Image Restoration
Ding Liu, Bihan Wen, Yuchen Fan, Chen Change Loy, Thomas S. Huang
SupR · 217 / 631 / 0 · 07 Jun 2018
Self-Attention Generative Adversarial Networks
Han Zhang, Ian Goodfellow, Dimitris N. Metaxas, Augustus Odena
GAN · 148 / 3,729 / 0 · 21 May 2018
Image Transformer
Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Noam M. Shazeer, Alexander Ku, Dustin Tran
ViT · 138 / 1,680 / 0 · 15 Feb 2018
Shift-Net: Image Inpainting via Deep Feature Rearrangement
Zhaoyi Yan, Xiaoming Li, Mu Li, W. Zuo, Shiguang Shan
57 / 451 / 0 · 29 Jan 2018
Generative Image Inpainting with Contextual Attention
Jiahui Yu, Zhe Lin, Jimei Yang, Xiaohui Shen, Xin Lu, Thomas S. Huang
GAN, DiffM · 96 / 2,266 / 0 · 24 Jan 2018
Non-local Neural Networks
Xiaolong Wang, Ross B. Girshick, Abhinav Gupta, Kaiming He
OffRL · 289 / 8,916 / 0 · 21 Nov 2017
Attention Is All You Need
Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Lukasz Kaiser, Illia Polosukhin
3DV · 719 / 132,199 / 0 · 12 Jun 2017
Context Encoders: Feature Learning by Inpainting
Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, Alexei A. Efros
SSL · 67 / 5,299 / 0 · 25 Apr 2016
Combining Markov Random Fields and Convolutional Neural Networks for Image Synthesis
Chuan Li, Michael Wand
78 / 770 / 0 · 18 Jan 2016