ResearchTrend.AI

Banyan: Improved Representation Learning with Explicit Structure

25 July 2024
Mattia Opper
N. Siddharth
arXiv (abs) | PDF | HTML
Main: 8 pages · 5 figures · 7 tables · Bibliography: 6 pages · Appendix: 1 page
Abstract

We present Banyan, a model that efficiently learns semantic representations by leveraging explicit hierarchical structure. While transformers excel at scale, they struggle in low-resource settings. Conversely, recent structured models have shown promise as efficient learners, but lag in performance. Banyan bridges this gap with two key innovations: an entangled hierarchical tree structure and diagonalized message passing, enabling it to outperform larger transformer models with just 14 non-embedding parameters. It excels in low-resource settings, offering a viable alternative for under-represented languages and highlighting its potential for efficient, interpretable NLP in resource-constrained environments.
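The "diagonalized message passing" named in the abstract suggests replacing dense mixing matrices with per-dimension (diagonal) weights when composing child embeddings up a tree, which is how a composition function can shrink to a handful of non-embedding parameters. A minimal sketch of that idea follows; the variable names, the tanh nonlinearity, and the exact composition form are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # embedding dimension (illustrative)

# Diagonal composition: instead of full d x d weight matrices, each child's
# contribution is scaled elementwise by a learned d-vector, so a two-child
# merge costs O(d) parameters rather than O(d^2).
# (w_left, w_right, bias are hypothetical names, not from the paper.)
w_left = rng.standard_normal(d)
w_right = rng.standard_normal(d)
bias = rng.standard_normal(d)

def compose(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Merge two child embeddings into a parent via diagonal (elementwise) weights."""
    return np.tanh(w_left * left + w_right * right + bias)

left = rng.standard_normal(d)
right = rng.standard_normal(d)
parent = compose(left, right)
print(parent.shape)  # (8,)
```

The same diagonal trick applies symmetrically on the way back down the tree, which is what keeps the total non-embedding parameter count so small.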

@article{opper2025_2407.17771,
  title={Banyan: Improved Representation Learning with Explicit Structure},
  author={Mattia Opper and N. Siddharth},
  journal={arXiv preprint arXiv:2407.17771},
  year={2025}
}