arXiv: 2312.08598
MotherNet: Fast Training and Inference via Hyper-Network Transformers

14 December 2023
Andreas Müller, Carlo Curino, Raghu Ramakrishnan
LMTD
ArXiv (abs) · PDF · HTML

Papers citing "MotherNet: Fast Training and Inference via Hyper-Network Transformers"

4 / 4 papers shown

Position: The Future of Bayesian Prediction Is Prior-Fitted
Samuel G. Müller, Arik Reuter, Noah Hollmann, David Rügamer, Frank Hutter
29 May 2025

Tabular Embeddings for Tables with Bi-Dimensional Hierarchical Metadata and Nesting
Gyanendra Shrestha, Chutain Jiang, Sai Akula, Vivek Yannam, Anna Pyayt, Michael Gubanov
LMTD
20 Feb 2025

EquiTabPFN: A Target-Permutation Equivariant Prior Fitted Networks
Michael Arbel, David Salinas, Frank Hutter
10 Feb 2025

TabICL: A Tabular Foundation Model for In-Context Learning on Large Data
Jingang Qu, David Holzmüller, Gaël Varoquaux, Marine Le Morvan
LMTD
08 Feb 2025