ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.
On-device AI: Quantization-aware Training of Transformers in Time-Series
Tianheng Ling, Gregor Schiele · 29 August 2024 · arXiv:2408.16495 · [AI4TS]

Papers citing "On-device AI: Quantization-aware Training of Transformers in Time-Series" (4 of 4 papers shown)
Enhancing Energy-efficiency by Solving the Throughput Bottleneck of LSTM Cells for Embedded FPGAs
Chao Qian, Tianheng Ling, Gregor Schiele · 04 Oct 2023
A White Paper on Neural Network Quantization
Markus Nagel, Marios Fournarakis, Rana Ali Amjad, Yelysei Bondarenko, M. V. Baalen, Tijmen Blankevoort · 15 Jun 2021 · [MQ]
Deep Transformer Models for Time Series Forecasting: The Influenza Prevalence Case
Neo Wu, Bradley Green, X. Ben, S. O’Banion · 23 Jan 2020 · [AI4TS]
Attention Is All You Need
Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Lukasz Kaiser, Illia Polosukhin · 12 Jun 2017
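As background to the listing above: quantization-aware training (the technique named in the paper's title) trains a network while simulating low-precision arithmetic in the forward pass, so the weights adapt to quantization error before deployment on-device. The core building block is "fake quantization": round a float tensor to an integer grid and scale it back to float. Below is a minimal NumPy sketch of symmetric int8 fake quantization; the function name and the specific scheme are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def fake_quantize(x, num_bits=8):
    """Simulate symmetric integer quantization ("fake quant").

    The tensor is scaled onto the signed integer grid, rounded, clipped,
    and scaled back to float, so downstream computation sees the
    quantization error while everything stays in floating point.
    NOTE: illustrative sketch, not the scheme used in the cited paper.
    """
    qmax = 2 ** (num_bits - 1) - 1            # e.g. 127 for int8
    scale = max(np.max(np.abs(x)) / qmax, 1e-12)  # per-tensor scale
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)  # integer grid
    return q * scale                           # dequantize back to float

# Example: quantization error is bounded by half a grid step.
x = np.linspace(-1.0, 1.0, 11)
xq = fake_quantize(x)
```

In a real QAT setup this rounding is paired with a straight-through estimator so gradients flow through the non-differentiable `round` as if it were the identity.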