Neural Network Inference on Mobile SoCs
arXiv:1908.11450
Siqi Wang, A. Pathania, T. Mitra
24 August 2019

Papers citing "Neural Network Inference on Mobile SoCs"

13 papers shown
Optimizing DNN Inference on Multi-Accelerator SoCs at Training-time
Matteo Risso, Luca Bompani, Daniele Jahier Pagliari
24 Feb 2025
Precision-aware Latency and Energy Balancing on Multi-Accelerator Platforms for DNN Inference
Matteo Risso, Luca Bompani, G. M. Sarda, Luca Benini, Enrico Macii, Massimo Poncino, Marian Verhelst, Daniele Jahier Pagliari
08 Jun 2023
Comparative Study of Parameter Selection for Enhanced Edge Inference for a Multi-Output Regression Model for Head Pose Estimation
A. Lindamulage, N. Kodagoda, Shyam Reyal, Pradeepa Samarasinghe, P. Yogarajah
28 Dec 2022
Accelerating CNN Inference on Long Vector Architectures via Co-design
Sonia Rani Gupta, Nikela Papadopoulou, Miquel Pericàs
22 Dec 2022
NAWQ-SR: A Hybrid-Precision NPU Engine for Efficient On-Device Super-Resolution
Stylianos I. Venieris, Mario Almeida, Royson Lee, Nicholas D. Lane
15 Dec 2022
Lightweight and Flexible Deep Equilibrium Learning for CSI Feedback in FDD Massive MIMO
Yifan Ma, Wentao Yu, Xianghao Yu, Jun Zhang, Shenghui Song, Khaled B. Letaief
28 Nov 2022
Accelerating Neural Network Inference with Processing-in-DRAM: From the Edge to the Cloud
Geraldo F. Oliveira, Juan Gómez Luna, Saugata Ghose, Amirali Boroumand, O. Mutlu
19 Sep 2022
Efficient Softmax Approximation for Deep Neural Networks with Attention Mechanism
Ihor Vasyltsov, Wooseok Chang
21 Nov 2021
Smart at What Cost? Characterising Mobile Deep Neural Networks in the Wild
Mario Almeida, Stefanos Laskaridis, Abhinav Mehrotra, Łukasz Dudziak, Ilias Leontiadis, Nicholas D. Lane
28 Sep 2021
FLASH: Fast Neural Architecture Search with Hardware Optimization
Guihong Li, Sumit K. Mandal, Ümit Y. Ogras, R. Marculescu
01 Aug 2021
AutoFL: Enabling Heterogeneity-Aware Energy Efficient Federated Learning
Young Geun Kim, Carole-Jean Wu
16 Jul 2021
A Comprehensive Survey on Hardware-Aware Neural Architecture Search
Hadjer Benmeziane, Kaoutar El Maghraoui, Hamza Ouarnoughi, Smail Niar, Martin Wistuba, Naigang Wang
22 Jan 2021
SPINN: Synergistic Progressive Inference of Neural Networks over Device and Cloud
Stefanos Laskaridis, Stylianos I. Venieris, Mario Almeida, Ilias Leontiadis, Nicholas D. Lane
14 Aug 2020