Automap: Towards Ergonomic Automated Parallelism for ML Models
6 December 2021
Michael Schaarschmidt, Dominik Grewe, Dimitrios Vytiniotis, Adam Paszke, G. Schmid, Tamara Norman, James Molloy, Jonathan Godwin, Norman A. Rink, Vinod Nair, Dan Belov
Tags: MoE

Papers citing "Automap: Towards Ergonomic Automated Parallelism for ML Models"

11 of 11 papers shown

TOAST: Fast and scalable auto-partitioning based on principled static analysis
Sami Alabed, Dominik Grewe, Norman A. Rink, Masha Samsikova, Timur Sitdikov, Agnieszka Swietlik, Dimitrios Vytiniotis, Daniel Belov
20 Aug 2025

EinDecomp: Decomposition of Declaratively-Specified Machine Learning and Numerical Computations for Parallel Execution
Proceedings of the VLDB Endowment (PVLDB), 2024
Daniel Bourgeois, Zhimin Ding, Dimitrije Jankov, Jiehui Li, Mahmoud Sleem, Yuxin Tang, Jiawen Yao, Xinyu Yao, Chris Jermaine
03 Oct 2024

Efficient Training of Large Language Models on Distributed Infrastructures: A Survey
Jiangfei Duan, Shuo Zhang, Zerui Wang, Lijuan Jiang, Wenwen Qu, ..., Dahua Lin, Yonggang Wen, Xin Jin, Tianwei Zhang, Yang Liu
29 Jul 2024

PartIR: Composing SPMD Partitioning Strategies for Machine Learning
International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2024
Sami Alabed, Daniel Belov, Bart Chrzaszcz, Juliana Franco, Dominik Grewe, ..., Michael Schaarschmidt, Timur Sitdikov, Agnieszka Swietlik, Dimitrios Vytiniotis, Joel Wee
20 Jan 2024

UniAP: Unifying Inter- and Intra-Layer Automatic Parallelism by Mixed Integer Quadratic Programming
Computer Vision and Pattern Recognition (CVPR), 2023
Hao Lin, Ke Wu, Jie Li, Jun Yu Li, Wu-Jun Li
31 Jul 2023

Chakra: Advancing Performance Benchmarking and Co-design using Standardized Execution Traces
Srinivas Sridharan, Taekyung Heo, Louis Feng, Zhaodong Wang, M. Bergeron, ..., Shengbao Zheng, Brian Coutinho, Saeed Rashidi, Changhai Man, T. Krishna
23 May 2023

Pex: Memory-efficient Microcontroller Deep Learning through Partial Execution
Edgar Liberis, Nicholas D. Lane
30 Nov 2022

Automatic Discovery of Composite SPMD Partitioning Strategies in PartIR
Sami Alabed, Dominik Grewe, Juliana Franco, Bart Chrzaszcz, Tom Natan, Tamara Norman, Norman A. Rink, Dimitrios Vytiniotis, Michael Schaarschmidt
Tags: MoE
07 Oct 2022

Learned Force Fields Are Ready For Ground State Catalyst Discovery
Michael Schaarschmidt, M. Rivière, A. Ganose, J. Spencer, Alex Gaunt, J. Kirkpatrick, Simon Axelrod, Peter W. Battaglia, Jonathan Godwin
26 Sep 2022

Pre-training via Denoising for Molecular Property Prediction
International Conference on Learning Representations (ICLR), 2022
Sheheryar Zaidi, Michael Schaarschmidt, James Martens, Hyunjik Kim, Yee Whye Teh, Alvaro Sanchez-Gonzalez, Peter W. Battaglia, Razvan Pascanu, Jonathan Godwin
Tags: DiffM, AI4CE
31 May 2022

Respecting causality is all you need for training physics-informed neural networks
Sizhuang He, Shyam Sankaran, P. Perdikaris
Tags: PINN, CML, AI4CE
14 Mar 2022