QNet: A Quantum-native Sequence Encoder Architecture
31 October 2022
Wei-Yen Day, Hao-Sheng Chen, Min Sun
ArXiv · PDF · HTML

Papers citing "QNet: A Quantum-native Sequence Encoder Architecture" (23 papers)
Quantum advantage in learning from experiments
Hsin-Yuan Huang, Michael Broughton, Jordan S. Cotler, Sitan Chen, Jingkai Li, ..., Hartmut Neven, Ryan Babbush, R. Kueng, J. Preskill, Jarrod R. McClean
01 Dec 2021
The Dawn of Quantum Natural Language Processing
R. Sipio, Jia-Hong Huang, Samuel Yen-Chi Chen, Stefano Mangini, Marcel Worring
13 Oct 2021
Embodied BERT: A Transformer Model for Embodied, Language-guided Visual Task Completion
Alessandro Suglia, Qiaozi Gao, Jesse Thomason, Govind Thattai, Gaurav Sukhatme
LM&Ro
10 Aug 2021
Pay Attention to MLPs
Hanxiao Liu, Zihang Dai, David R. So, Quoc V. Le
AI4CE
17 May 2021
FNet: Mixing Tokens with Fourier Transforms
James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon
09 May 2021
Do You Even Need Attention? A Stack of Feed-Forward Layers Does Surprisingly Well on ImageNet
Luke Melas-Kyriazi
ViT
06 May 2021
MLP-Mixer: An all-MLP Architecture for Vision
Ilya O. Tolstikhin, N. Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, ..., Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, Mario Lucic, Alexey Dosovitskiy
04 May 2021
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, ..., Matthias Minderer, G. Heigold, Sylvain Gelly, Jakob Uszkoreit, N. Houlsby
ViT
22 Oct 2020
TransModality: An End2End Fusion Method with Transformer for Multimodal Sentiment Analysis
Zilong Wang, Zhaohong Wan, Xiaojun Wan
07 Sep 2020
Quantum Long Short-Term Memory
Samuel Yen-Chi Chen, Shinjae Yoo, Yao-Lung L. Fang
03 Sep 2020
Recurrent Quantum Neural Networks
Johannes Bausch
25 Jun 2020
Linformer: Self-Attention with Linear Complexity
Sinong Wang, Belinda Z. Li, Madian Khabsa, Han Fang, Hao Ma
08 Jun 2020
Language Models are Few-Shot Learners
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, ..., Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei
BDL
28 May 2020
Conformer: Convolution-augmented Transformer for Speech Recognition
Anmol Gulati, James Qin, Chung-Cheng Chiu, Niki Parmar, Yu Zhang, ..., Wei Han, Shibo Wang, Zhengdong Zhang, Yonghui Wu, Ruoming Pang
16 May 2020
EmpTransfo: A Multi-head Transformer Architecture for Creating Empathetic Dialog Systems
Rohola Zandie, Mohammad H. Mahoor
05 Mar 2020
word2ket: Space-efficient Word Embeddings inspired by Quantum Entanglement
Ali (Aliakbar) Panahi, Seyran Saeedi, Tom Arodz
12 Nov 2019
Efficient Learning for Deep Quantum Neural Networks
Kerstin Beer, Dmytro Bondarenko, Terry Farrelly, T. Osborne, Robert Salzmann, Ramona Wolf
27 Feb 2019
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova
VLM, SSL, SSeg
11 Oct 2018
Tensor2Tensor for Neural Machine Translation
Ashish Vaswani, Samy Bengio, E. Brevdo, François Chollet, Aidan Gomez, ..., Nal Kalchbrenner, Niki Parmar, Ryan Sepassi, Noam M. Shazeer, Jakob Uszkoreit
16 Mar 2018
Attention Is All You Need
Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Lukasz Kaiser, Illia Polosukhin
3DV
12 Jun 2017
SGDR: Stochastic Gradient Descent with Warm Restarts
I. Loshchilov, Frank Hutter
ODL
13 Aug 2016
TensorFlow: A system for large-scale machine learning
Martín Abadi, P. Barham, Jianmin Chen, Zhiwen Chen, Andy Davis, ..., Vijay Vasudevan, Pete Warden, Martin Wicke, Yuan Yu, Xiaoqiang Zhang
GNN, AI4CE
27 May 2016
Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation
Kyunghyun Cho, B. V. Merrienboer, Çağlar Gülçehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, Yoshua Bengio
AIMat
03 Jun 2014