ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

Transformers are Provably Optimal In-context Estimators for Wireless Communications
arXiv:2311.00226

1 November 2023
Vishnu Teja Kunde, Vicram Rajagopalan, Chandra Shekhara Kaushik Valmeekam, Krishna R. Narayanan, S. Shakkottai, D. Kalathil, J. Chamberland

Papers citing "Transformers are Provably Optimal In-context Estimators for Wireless Communications"

33 papers shown
In-Context Learning for Gradient-Free Receiver Adaptation: Principles, Applications, and Theory. Matteo Zecchin, Tomer Raviv, D. Kalathil, Krishna R. Narayanan, Nir Shlezinger, Osvaldo Simeone. 18 Jun 2025.

Analogical Learning for Cross-Scenario Generalization: Framework and Application to Intelligent Localization. Zirui Chen, Zhaoyang Zhang, Ziqing Xing, Ridong Li, Zhaohui Yang, Richeng Jin, Chongwen Huang, YuZhi Yang, Mérouane Debbah. 09 Apr 2025.

Decision Feedback In-Context Learning for Wireless Symbol Detection. Li Fan, Jing Yang, Cong Shen. 20 Mar 2025.

DenoMAE2.0: Improving Denoising Masked Autoencoders by Classifying Local Patches. Atik Faysal, Mohammad Rostami, Taha Boushine, Reihaneh Gh. Roshan, Huaxia Wang, Nikhil Muralidhar. 25 Feb 2025.

Decision Feedback In-Context Symbol Detection over Block-Fading Channels. Li Fan, Jing Yang, Cong Shen. 12 Nov 2024.

In-Context Learning with Representations: Contextual Generalization of Trained Transformers. Tong Yang, Yu Huang, Yingbin Liang, Yuejie Chi. 19 Aug 2024.

Pretraining Decision Transformers with Reward Prediction for In-Context Multi-task Structured Bandit Learning. Subhojyoti Mukherjee, Josiah P. Hanna, Qiaomin Xie, Robert Nowak. 07 Jun 2024.

Cell-Free Multi-User MIMO Equalization via In-Context Learning. Matteo Zecchin, Kai Yu, Osvaldo Simeone. 08 Apr 2024.

Training Dynamics of Multi-Head Softmax Attention for In-Context Learning: Emergence, Convergence, and Optimality. Siyu Chen, Heejune Sheen, Tianhao Wang, Zhuoran Yang. 29 Feb 2024.

In-Context Learning with Transformers: Softmax Attention Adapts to Function Lipschitzness. Liam Collins, Advait Parulekar, Aryan Mokhtari, Sujay Sanghavi, Sanjay Shakkottai. 18 Feb 2024.

In-Context Learning for MIMO Equalization Using Transformer-Based Sequence Models. Matteo Zecchin, Kai Yu, Osvaldo Simeone. 10 Nov 2023.

How Do Transformers Learn In-Context Beyond Simple Functions? A Case Study on Learning with Representations. Tianyu Guo, Wei Hu, Song Mei, Huan Wang, Caiming Xiong, Silvio Savarese, Yu Bai. 16 Oct 2023.

How Many Pretraining Tasks Are Needed for In-Context Learning of Linear Regression? Jingfeng Wu, Difan Zou, Zixiang Chen, Vladimir Braverman, Quanquan Gu, Peter L. Bartlett. 12 Oct 2023.

In-Context Convergence of Transformers. Yu Huang, Yuan Cheng, Yingbin Liang. 08 Oct 2023.

One Step of Gradient Descent is Provably the Optimal In-Context Learner with One Layer of Linear Self-Attention. Arvind V. Mahankali, Tatsunori B. Hashimoto, Tengyu Ma. 07 Jul 2023.

Supervised Pretraining Can Learn In-Context Reinforcement Learning. Jonathan Lee, Annie Xie, Aldo Pacchiano, Yash Chandak, Chelsea Finn, Ofir Nachum, Emma Brunskill. 26 Jun 2023.

Trained Transformers Learn Linear Models In-Context. Ruiqi Zhang, Spencer Frei, Peter L. Bartlett. 16 Jun 2023.

In-Context Learning through the Bayesian Prism. Madhuri Panwar, Kabir Ahuja, Navin Goyal. 08 Jun 2023.

Transformers learn to implement preconditioned gradient descent for in-context learning. Kwangjun Ahn, Xiang Cheng, Hadi Daneshmand, S. Sra. 01 Jun 2023.

Transformers learn in-context by gradient descent. J. Oswald, Eyvind Niklasson, E. Randazzo, João Sacramento, A. Mordvintsev, A. Zhmoginov, Max Vladymyrov. 15 Dec 2022.

What learning algorithm is in-context learning? Investigations with linear models. Ekin Akyürek, Dale Schuurmans, Jacob Andreas, Tengyu Ma, Denny Zhou. 28 Nov 2022.

What Can Transformers Learn In-Context? A Case Study of Simple Function Classes. Shivam Garg, Dimitris Tsipras, Percy Liang, Gregory Valiant. 01 Aug 2022.

Variational Autoencoder Leveraged MMSE Channel Estimation. Michael Baur, B. Fesl, M. Koller, Wolfgang Utschick. 11 May 2022.

Data Distributional Properties Drive Emergent In-Context Learning in Transformers. Stephanie C. Y. Chan, Adam Santoro, Andrew Kyle Lampinen, Jane X. Wang, Aaditya K. Singh, Pierre Harvey Richemond, J. Mcclelland, Felix Hill. 22 Apr 2022.

Rethinking the Role of Demonstrations: What Makes In-Context Learning Work? Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, M. Lewis, Hannaneh Hajishirzi, Luke Zettlemoyer. 25 Feb 2022.

An Explanation of In-context Learning as Implicit Bayesian Inference. Sang Michael Xie, Aditi Raghunathan, Percy Liang, Tengyu Ma. 03 Nov 2021.

MetaICL: Learning to Learn In Context. Sewon Min, M. Lewis, Luke Zettlemoyer, Hannaneh Hajishirzi. 29 Oct 2021.

End-to-end Learning for OFDM: From Neural Receivers to Pilotless Communication. Fayçal Ait Aoudia, J. Hoydis. 11 Sep 2020.

Deep Learning Methods for Solving Linear Inverse Problems: Research Directions and Paradigms. Yanna Bai, Wei Chen, Jie Chen, Weisi Guo. 27 Jul 2020.

Language Models are Few-Shot Learners. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, ..., Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei. 28 May 2020.

Unsupervised Linear and Nonlinear Channel Equalization and Decoding using Variational Autoencoders. Avi Caciularu, D. Burshtein. 21 May 2019.

Attention Is All You Need. Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Lukasz Kaiser, Illia Polosukhin. 12 Jun 2017.

Adam: A Method for Stochastic Optimization. Diederik P. Kingma, Jimmy Ba. 22 Dec 2014.