Omni TM-AE: A Scalable and Interpretable Embedding Model Using the Full Tsetlin Machine State Space

22 May 2025
Ahmed K. Kadhim
Lei Jiao
Rishad Shafik
Ole-Christoffer Granmo
Main: 9 pages · 6 figures · 3 tables · Bibliography: 2 pages · Appendix: 2 pages
Abstract

The increasing complexity of large-scale language models has amplified concerns regarding their interpretability and reusability. While traditional embedding models like Word2Vec and GloVe offer scalability, they lack transparency and often behave as black boxes. Conversely, interpretable models such as the Tsetlin Machine (TM) have shown promise in constructing explainable learning systems, though they previously faced limitations in scalability and reusability. In this paper, we introduce Omni Tsetlin Machine AutoEncoder (Omni TM-AE), a novel embedding model that fully exploits the information contained in the TM's state matrix, including literals previously excluded from clause formation. This method enables the construction of reusable, interpretable embeddings through a single training phase. Extensive experiments across semantic similarity, sentiment classification, and document clustering tasks show that Omni TM-AE performs competitively with and often surpasses mainstream embedding models. These results demonstrate that it is possible to balance performance, scalability, and interpretability in modern Natural Language Processing (NLP) systems without resorting to opaque architectures.
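As a rough intuition for how a full state-space embedding differs from a clause-only one, the sketch below contrasts the two views on a toy Tsetlin Machine state matrix. The matrix shapes, the inclusion threshold, and the one-dimension-per-clause embedding layout are illustrative assumptions, not the authors' implementation; the actual Omni TM-AE construction is given in the paper itself.

```python
import numpy as np

# Minimal, illustrative sketch (not the authors' code). It assumes a Tsetlin
# Machine Autoencoder stores one automaton state per (clause, literal) pair,
# and that a word's embedding has one dimension per clause. The abstract's
# point is that Omni TM-AE uses the full state space, so literals that never
# crossed the inclusion threshold still contribute graded information,
# instead of being discarded as in a purely clause-based (binary) view.

num_clauses = 8              # hypothetical number of clauses
num_features = 5             # hypothetical vocabulary size
num_states = 200             # hypothetical automaton state range [1, num_states]
threshold = num_states // 2  # states above this are "included" in a clause

rng = np.random.default_rng(0)
# state_matrix[j, k]: automaton state of the positive literal of word k
# in clause j (negated literals are omitted here for brevity).
state_matrix = rng.integers(1, num_states + 1, size=(num_clauses, num_features))

# Conventional clause-based embedding: binary, only included literals survive.
clause_embeddings = (state_matrix > threshold).astype(float)

# Full-state embedding: every automaton state, normalized to [0, 1], so
# excluded literals still record how close they came to inclusion.
full_state_embeddings = state_matrix / num_states

word = 0  # embed the first vocabulary word
print("clause-only embedding:", clause_embeddings[:, word])
print("full-state embedding :", full_state_embeddings[:, word])
```

In this toy view, the clause-only vector loses every literal below the threshold, while the full-state vector keeps that information, which is the reuse-without-retraining property the abstract highlights.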

@article{kadhim2025_2505.16386,
  title={Omni TM-AE: A Scalable and Interpretable Embedding Model Using the Full Tsetlin Machine State Space},
  author={Ahmed K. Kadhim and Lei Jiao and Rishad Shafik and Ole-Christoffer Granmo},
  journal={arXiv preprint arXiv:2505.16386},
  year={2025}
}