Large Linguistic Models: Investigating LLMs' metalinguistic abilities

1 May 2023
Gašper Beguš
Maksymilian Dąbkowski
Ryan Rhodes
Abstract

The performance of large language models (LLMs) has recently improved to the point where models can perform well on many language tasks. We show here that--for the first time--the models can also generate valid metalinguistic analyses of language data. We outline a research program where the behavioral interpretability of LLMs on these tasks is tested via prompting. LLMs are trained primarily on text--as such, evaluating their metalinguistic abilities improves our understanding of their general capabilities and sheds new light on theoretical models in linguistics. We show that OpenAI's (2024) o1 vastly outperforms other models on tasks involving drawing syntactic trees and phonological generalization. We speculate that OpenAI o1's unique advantage over other models may result from the model's chain-of-thought mechanism, which mimics the structure of human reasoning used in complex cognitive tasks, such as linguistic analysis.
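The abstract describes evaluating LLMs' metalinguistic abilities via prompting, for instance by asking a model to produce a syntactic tree for a sentence. As a minimal sketch of what such a prompt might look like (the exact wording, category labels, and evaluation protocol here are assumptions, not the authors' materials):

```python
def make_tree_prompt(sentence: str) -> str:
    """Build a prompt asking an LLM for a labeled bracketing
    (phrase-structure tree) of the given sentence.

    This is an illustrative reconstruction, not the paper's
    actual prompt; real experiments would also fix a theory-
    specific label inventory and an answer format.
    """
    return (
        "You are a linguist. Provide a labeled bracketing "
        "(phrase-structure tree) for the following sentence, "
        "using standard category labels such as S, NP, VP, and PP.\n\n"
        f"Sentence: {sentence}\n"
        "Tree:"
    )

prompt = make_tree_prompt("The cat chased the mouse.")
print(prompt)
# The prompt would then be sent to a model via an API client, and the
# returned bracketing compared against a gold-standard parse.
```

A chain-of-thought model such as the o1 discussed in the abstract would, on this framing, reason through constituency step by step before emitting the bracketing, which may be why it outperforms other models on such tasks.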

@article{beguš2025_2305.00948,
  title={Large Linguistic Models: Investigating LLMs' metalinguistic abilities},
  author={Gašper Beguš and Maksymilian Dąbkowski and Ryan Rhodes},
  journal={arXiv preprint arXiv:2305.00948},
  year={2025}
}