Improving Compositionality of Neural Networks by Decoding Representations to Inputs

1 June 2021
Mike Wu, Noah D. Goodman, Stefano Ermon
AI4CE

Papers citing "Improving Compositionality of Neural Networks by Decoding Representations to Inputs"

4 / 4 papers shown
Constraining Representations Yields Models That Know What They Don't Know
João Monteiro, Pau Rodríguez López, Pierre-Andre Noel, I. Laradji, David Vazquez
AAML · 44 · 0 · 0 · 30 Aug 2022

How to Reuse and Compose Knowledge for a Lifetime of Tasks: A Survey on Continual Learning and Functional Composition
Jorge Armando Mendez Mendez, Eric Eaton
KELM · CLL · 32 · 27 · 0 · 15 Jul 2022

A Survey on Bias and Fairness in Machine Learning
Ninareh Mehrabi, Fred Morstatter, N. Saxena, Kristina Lerman, Aram Galstyan
SyDa · FaML · 335 · 4,223 · 0 · 23 Aug 2019

Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
Y. Gal, Zoubin Ghahramani
UQCV · BDL · 285 · 9,145 · 0 · 06 Jun 2015