
S-Crescendo: A Nested Transformer Weaving Framework for Scalable Nonlinear System in S-Domain Representation

17 May 2025
Junlang Huang
Hao Chen
Li Luo
Yong Cai
Lexin Zhang
Tianhao Ma
Yitian Zhang
Zhong Guan
Abstract

Simulation of high-order nonlinear systems requires extensive computational resources, especially in modern VLSI backend design, where bifurcation-induced instability and chaos-like transient behaviors pose challenges. We present S-Crescendo, a nested transformer weaving framework that synergizes the S-domain representation with neural operators for scalable time-domain prediction in high-order nonlinear networks, alleviating the computational bottlenecks of conventional solvers based on the Newton-Raphson method. By leveraging the partial-fraction decomposition of an n-th order transfer function into first-order modal terms with repeated poles and residues, our method bypasses conventional Jacobian-matrix-based iterations and reduces computational complexity from cubic O(n^3) to linear O(n). The proposed architecture seamlessly integrates an S-domain encoder with an attention-based correction operator to simultaneously isolate the dominant response and adaptively capture higher-order nonlinearities. Validated on order-1 to order-10 networks, our method achieves up to 0.99 test-set R^2 accuracy against HSPICE golden waveforms and accelerates simulation by up to 18x, providing a scalable, physics-aware framework for high-dimensional nonlinear modeling.
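To make the complexity claim concrete: the partial-fraction step the abstract describes expands a strictly proper transfer function H(s) = N(s)/D(s) into a sum of first-order modal terms, after which the impulse response is a simple sum of exponentials. The sketch below illustrates this classical decomposition for the simplified case of distinct poles (the paper also handles repeated poles); the function names and interface are illustrative assumptions, not the authors' actual implementation.

```python
import cmath

def polyval(coeffs, s):
    """Evaluate a polynomial (coefficients in descending order) at s via Horner's rule."""
    acc = 0j
    for c in coeffs:
        acc = acc * s + c
    return acc

def residues(num, poles):
    """Residue at each distinct pole p_i: r_i = N(p_i) / prod_{j != i} (p_i - p_j).

    Each residue costs O(n) work, so all n residues cost O(n^2) once,
    versus O(n^3) per Newton-Raphson iteration for a dense Jacobian solve.
    """
    rs = []
    for i, p in enumerate(poles):
        denom = 1.0 + 0j
        for j, q in enumerate(poles):
            if j != i:
                denom *= (p - q)
        rs.append(polyval(num, p) / denom)
    return rs

def impulse_response(num, poles, t):
    """h(t) = sum_i r_i * exp(p_i * t): evaluating the modal sum is O(n) per time point."""
    return sum(r * cmath.exp(p * t)
               for r, p in zip(residues(num, poles), poles)).real

# Example: H(s) = 1 / ((s+1)(s+2)) has residues 1 and -1,
# so h(t) = e^{-t} - e^{-2t}.
```

Once the poles and residues are fixed, every subsequent time-domain evaluation is a linear-cost sum over the n modal terms, which is the O(n) scaling the abstract refers to.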

View on arXiv
@article{huang2025_2505.11843,
  title={S-Crescendo: A Nested Transformer Weaving Framework for Scalable Nonlinear System in S-Domain Representation},
  author={Junlang Huang and Hao Chen and Li Luo and Yong Cai and Lexin Zhang and Tianhao Ma and Yitian Zhang and Zhong Guan},
  journal={arXiv preprint arXiv:2505.11843},
  year={2025}
}