Assessing the Limits of In-Context Learning beyond Functions using Partially Ordered Relation

16 June 2025
Debanjan Dutta
Faizanuddin Ansari
Swagatam Das
Main: 9 pages, 15 figures, 2 tables; Bibliography: 3 pages; Appendix: 10 pages
Abstract

Generating rational and generally accurate responses to tasks, often guided by example demonstrations, highlights the remarkable In-Context Learning (ICL) capabilities of Large Language Models (LLMs), which require no updates to the model's parameters. While ongoing work has explored inference over document-level concepts, the behavior of LLMs when learning well-defined functions or relations in context still calls for careful investigation. In this article, we examine the performance of ICL on partially ordered relations by introducing the notion of inductively increasing complexity in prompts. In most cases, the saturation of the chosen metric indicates that while ICL offers some benefit, its effectiveness remains constrained as prompt complexity increases, even in the presence of sufficient demonstrative examples. This behavior is evident in our empirical findings and is further justified theoretically in terms of ICL's implicit optimization process. The code is available \href{this https URL}{here}.
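
The setup described in the abstract can be illustrated with a minimal sketch (not the authors' released code). It uses divisibility on the positive integers as an example partially ordered relation and builds prompts whose demonstration sets grow in size, which is one plausible reading of "inductively increasing complexity." The choice of relation, the prompt template, and the demonstration counts are all assumptions made for illustration.

# Illustrative probe of ICL on a partial order (hypothetical, for exposition).
# Relation: divisibility on positive integers, i.e., a <= b in the poset
# iff a divides b. This satisfies reflexivity, antisymmetry, and transitivity.
import random

def related(a: int, b: int) -> bool:
    """True iff a divides b (divisibility partial order)."""
    return b % a == 0

def build_prompt(n_demos: int, query: tuple[int, int], seed: int = 0) -> str:
    """Assemble an ICL prompt: n_demos labeled pairs followed by an unlabeled query."""
    rng = random.Random(seed)
    lines = []
    for _ in range(n_demos):
        a, b = rng.randint(1, 30), rng.randint(1, 30)
        lines.append(f"({a}, {b}) -> {related(a, b)}")
    a, b = query
    lines.append(f"({a}, {b}) -> ")  # the model is asked to complete this line
    return "\n".join(lines)

# Prompts of increasing complexity (more demonstrations) for the same query.
for k in (4, 8, 16):
    print(f"--- {k} demonstrations ---")
    print(build_prompt(k, query=(3, 12)))

Feeding prompts like these to a model and tracking accuracy as the demonstration count grows would reproduce, in spirit, the saturation behavior the abstract reports, though the paper's actual relations and complexity schedule may differ.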

@article{dutta2025_2506.13608,
  title={Assessing the Limits of In-Context Learning beyond Functions using Partially Ordered Relation},
  author={Debanjan Dutta and Faizanuddin Ansari and Swagatam Das},
  journal={arXiv preprint arXiv:2506.13608},
  year={2025}
}