arXiv:2404.19708

Harmonic LLMs are Trustworthy

30 April 2024
Nicholas S. Kersting
Mohammad Rahman
Suchismitha Vedala
Yang Wang
Abstract

We introduce an intuitive method to test the robustness (stability and explainability) of any black-box LLM in real time, based upon the local deviation from harmonicity, denoted γ. To the best of our knowledge this is the first completely model-agnostic and unsupervised method of measuring the robustness of any given response from an LLM, based upon the model itself conforming to a purely mathematical standard. We conduct human annotation experiments to show the positive correlation of γ with false or misleading answers, and demonstrate that following the gradient of γ in stochastic gradient ascent efficiently exposes adversarial prompts. Measuring γ across thousands of queries in popular LLMs (GPT-4, ChatGPT, Claude-2.1, Mixtral-8x7B, Smaug-72B, Llama2-7B, and MPT-7B) allows us to estimate the likelihood of wrong or hallucinatory answers automatically and to quantitatively rank the reliability of these models in various objective domains (Web QA, TruthfulQA, and Programming QA). Across all models and domains tested, human ratings confirm that γ → 0 indicates trustworthiness, and the low-γ leaders among these models are GPT-4, ChatGPT, and Smaug-72B.
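The quantity γ rests on the mean-value property of harmonic functions: a harmonic function's value at a point equals its average over any small sphere centred there, so the gap between the two measures local deviation from harmonicity. As a minimal sketch (not the paper's implementation, whose exact definition of γ for LLM responses is not given in the abstract), the statistic can be estimated for any black-box numeric function by Monte-Carlo sampling on a sphere; all names here (`gamma`, `radius`, `n_samples`) are illustrative:

```python
import numpy as np

def gamma(f, x, radius=0.1, n_samples=32, seed=0):
    """Estimate the local deviation from harmonicity of a black-box
    function f: R^d -> R at the point x.

    A harmonic function satisfies the mean-value property: f(x) equals
    the average of f over a small sphere centred at x.  gamma is the
    size of the gap between f(x) and that sampled sphere average.
    """
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    # Uniform random directions on the unit sphere in R^d.
    dirs = rng.normal(size=(n_samples, d))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    # Antithetic pairs (x + r*u and x - r*u) cancel the linear part of f
    # exactly, reducing the variance of the estimate.
    pts = np.concatenate([x + radius * dirs, x - radius * dirs])
    sphere_mean = np.mean([f(p) for p in pts])
    return float(abs(f(x) - sphere_mean))

# Linear maps are harmonic, so gamma should vanish (up to float error);
# |p|^2 has Laplacian 4, so its gamma is radius**2 on the unit sphere.
x0 = np.array([0.5, -0.3])
print(gamma(lambda p: 2.0 * p[0] - p[1], x0))      # ~ 0
print(gamma(lambda p: p[0]**2 + p[1]**2, x0))      # ~ radius**2 = 0.01
```

For an LLM one would presumably evaluate f on perturbed prompts (or prompt embeddings) rather than raw vectors, and compare response embeddings instead of scalars; the sketch only shows the γ statistic itself.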
