ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Deep Networks With Large Output Spaces
arXiv:1412.7479 · 23 December 2014
Sudheendra Vijayanarasimhan, Jonathon Shlens, R. Monga, J. Yagnik
Topics: BDL

Papers citing "Deep Networks With Large Output Spaces"

9 / 9 papers shown
ImageNet-21K Pretraining for the Masses
T. Ridnik, Emanuel Ben-Baruch, Asaf Noy, Lihi Zelnik-Manor
22 Apr 2021 · Topics: SSeg, VLM, CLIP

Sampled Softmax with Random Fourier Features
A. S. Rawat, Jiecao Chen, Felix X. Yu, A. Suresh, Sanjiv Kumar
24 Jul 2019

A no-regret generalization of hierarchical softmax to extreme multi-label classification
Marek Wydmuch, Kalina Jasinska, Mikhail Kuznetsov, R. Busa-Fekete, Krzysztof Dembczyński
27 Oct 2018

Beyond One-hot Encoding: lower dimensional target embedding
Pau Rodríguez López, Miguel Angel Bautista, Jordi Gonzalez, Sergio Escalera
28 Jun 2018

Improving Negative Sampling for Word Representation using Self-embedded Features
Long Chen, Fajie Yuan, J. Jose, Weinan Zhang
26 Oct 2017 · Topics: SSL

Exact gradient updates in time independent of output size for the spherical loss family
Pascal Vincent, A. D. Brébisson, Xavier Bouthillier
26 Jun 2016

Strategies for Training Large Vocabulary Neural Language Models
Welin Chen, David Grangier, Michael Auli
15 Dec 2015 · Topics: VLM

BlackOut: Speeding up Recurrent Neural Network Language Models With Very Large Vocabularies
Shihao Ji, S.V.N. Vishwanathan, N. Satish, Michael J. Anderson, Pradeep Dubey
21 Nov 2015

Efficient Exact Gradient Update for training Deep Networks with Very Large Sparse Targets
Pascal Vincent, A. D. Brébisson, Xavier Bouthillier
22 Dec 2014