Seq-VCR: Preventing Collapse in Intermediate Transformer Representations for Enhanced Reasoning

4 November 2024
Md Rifat Arefin
G. Subbaraj
Nicolas Angelard-Gontier
Yann LeCun
Irina Rish
Ravid Shwartz-Ziv
C. Pal
Abstract

Decoder-only Transformers often struggle with complex reasoning tasks, particularly arithmetic reasoning requiring multiple sequential operations. In this work, we identify representation collapse in the model's intermediate layers as a key factor limiting their reasoning capabilities. To address this, we propose Sequential Variance-Covariance Regularization (Seq-VCR), which enhances the entropy of intermediate representations and prevents collapse. Combined with dummy pause tokens as substitutes for chain-of-thought (CoT) tokens, our method significantly improves performance in arithmetic reasoning problems. In the challenging 5×5 integer multiplication task, our approach achieves 99.5% exact match accuracy, outperforming models of the same size (which yield 0% accuracy) and GPT-4 with five-shot CoT prompting (44%). We also demonstrate superior results on arithmetic expression and longest increasing subsequence (LIS) datasets. Our findings highlight the importance of preventing intermediate layer representation collapse to enhance the reasoning capabilities of Transformers and show that Seq-VCR offers an effective solution without requiring explicit CoT supervision.
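
To make the idea concrete, below is a minimal PyTorch sketch of a variance-covariance regularizer in the spirit of Seq-VCR. It assumes a VICReg-style hinge on per-dimension variance plus a penalty on off-diagonal covariance entries, applied to intermediate hidden states; the function names, the weights alpha and beta, and the exact form of the penalty are illustrative assumptions rather than the paper's implementation.

import torch

def variance_covariance_penalty(h, gamma=1.0, eps=1e-4):
    # h: (N, D) intermediate representations from one Transformer layer,
    # with batch and sequence dimensions flattened into N (illustrative shape).
    h = h - h.mean(dim=0)                        # center each dimension
    std = torch.sqrt(h.var(dim=0) + eps)         # per-dimension standard deviation
    var_loss = torch.relu(gamma - std).mean()    # hinge: keep variance above gamma (anti-collapse)
    n, d = h.shape
    cov = (h.T @ h) / (n - 1)                    # (D, D) covariance of centered features
    off_diag = cov - torch.diag(torch.diag(cov))
    cov_loss = (off_diag ** 2).sum() / d         # penalize cross-dimension correlation
    return var_loss, cov_loss

def seq_vcr_regularizer(hidden_states, alpha=1.0, beta=0.04):
    # hidden_states: list of (N, D) tensors, one per intermediate layer (assumed interface).
    total = 0.0
    for h in hidden_states:
        var_loss, cov_loss = variance_covariance_penalty(h)
        total = total + alpha * var_loss + beta * cov_loss
    return total

In training, a term like this would be added to the usual next-token prediction loss with a suitable weight. The dummy pause tokens that substitute for chain-of-thought tokens in the paper's setup are orthogonal to the regularizer sketched above.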

@article{arefin2025_2411.02344,
  title={Seq-VCR: Preventing Collapse in Intermediate Transformer Representations for Enhanced Reasoning},
  author={Md Rifat Arefin and Gopeshh Subbaraj and Nicolas Gontier and Yann LeCun and Irina Rish and Ravid Shwartz-Ziv and Christopher Pal},
  journal={arXiv preprint arXiv:2411.02344},
  year={2025}
}