ARWKV: Pretrain is not what we need, an RNN-Attention-Based Language Model Born from Transformer

28 January 2025
Lin Yueyu, Li Zhiyuan, Peter Yue, Liu Xiao

Papers citing "ARWKV: Pretrain is not what we need, an RNN-Attention-Based Language Model Born from Transformer"

4 / 4 papers shown
WuNeng: Hybrid State with Attention
Liu Xiao, Li Zhiyuan, Lin Yueyu
27 Apr 2025
State Tuning: State-based Test-Time Scaling on RWKV-7
Liu Xiao, Li Zhiyuan, Lin Yueyu
07 Apr 2025
RWKVTTS: Yet another TTS based on RWKV-7
Lin Yueyu, Liu Xiao
04 Apr 2025
BlackGoose Rimer: Harnessing RWKV-7 as a Simple yet Superior Replacement for Transformers in Large-Scale Time Series Modeling
Li weile, Liu Xiao
08 Mar 2025