ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

One-vs-Each Approximation to Softmax for Scalable Estimation of Probabilities

Michalis K. Titsias
23 September 2016 · arXiv:1609.07410
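For context, the paper's central result is the "one-vs-each" lower bound on the softmax probability: the probability of class k is bounded below by a product of pairwise logistic (sigmoid) terms, one per competing class, which makes stochastic estimation over huge label sets tractable. A minimal NumPy sketch of the bound (the function names and example logits are illustrative, not taken from the paper):

```python
import numpy as np

def softmax(f):
    # Exact softmax probabilities over a logit vector f.
    e = np.exp(f - f.max())
    return e / e.sum()

def one_vs_each(f, k):
    # One-vs-each lower bound on softmax(f)[k]:
    #   p_k >= prod_{j != k} sigmoid(f_k - f_j),
    # i.e. a product of pairwise binary-logistic probabilities.
    diffs = f[k] - np.delete(f, k)
    return np.prod(1.0 / (1.0 + np.exp(-diffs)))

f = np.array([2.0, 0.5, -1.0, 0.0])  # illustrative logits
p = softmax(f)
for k in range(len(f)):
    # The bound holds for every class.
    assert one_vs_each(f, k) <= p[k] + 1e-12
```

Because each sigmoid factor depends on only one pairwise difference, the log of the bound decomposes into a sum over classes that can be subsampled, which is the source of the method's scalability.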

Papers citing "One-vs-Each Approximation to Softmax for Scalable Estimation of Probabilities"

5 papers shown.
1. "BlackOut: Speeding up Recurrent Neural Network Language Models With Very Large Vocabularies"
   Shihao Ji, S.V.N. Vishwanathan, N. Satish, Michael J. Anderson, Pradeep Dubey
   21 Nov 2015

2. "Deep Networks With Large Output Spaces"
   Sudheendra Vijayanarasimhan, Jonathon Shlens, R. Monga, J. Yagnik
   23 Dec 2014

3. "Scalable Bayesian Modelling of Paired Symbols"
   Ulrich Paquet, Noam Koenigstein, Ole Winther
   09 Sep 2014

4. "Distributed Representations of Words and Phrases and their Compositionality"
   Tomas Mikolov, Ilya Sutskever, Kai Chen, G. Corrado, J. Dean
   16 Oct 2013

5. "A Fast and Simple Algorithm for Training Neural Probabilistic Language Models"
   A. Mnih, Yee Whye Teh
   27 Jun 2012