Invisible Tokens, Visible Bills: The Urgent Need to Audit Hidden Operations in Opaque LLM Services
arXiv:2505.18471

24 May 2025
Guoheng Sun
Ziyao Wang
Xuandong Zhao
Bowei Tian
Zheyu Shen
Yexiao He
Jinming Xing
Ang Li

Papers citing "Invisible Tokens, Visible Bills: The Urgent Need to Audit Hidden Operations in Opaque LLM Services"

21 / 21 papers shown
Why and How LLMs Hallucinate: Connecting the Dots with Subsequence Associations
Yiyou Sun
Y. Gai
Lijie Chen
Abhilasha Ravichander
Yejin Choi
Basel Alomair
HILM
98
2
0
17 Apr 2025
Antidistillation Sampling
Yash Savani
Asher Trockman
Zhili Feng
Avi Schwarzschild
Alexander Robey
Marc Finzi
J. Zico Kolter
99
3
0
17 Apr 2025
Are You Getting What You Pay For? Auditing Model Substitution in LLM APIs
Will Cai
Tianneng Shi
Xuandong Zhao
Dawn Song
64
6
0
07 Apr 2025
Stop Overthinking: A Survey on Efficient Reasoning for Large Language Models
Yang Sui
Yu-Neng Chuang
Guanchu Wang
Jiamu Zhang
Tianyi Zhang
...
Hongyi Liu
Andrew Wen
Shaochen Zhong
Hanjie Chen
OffRL, ReLM, LRM
191
100
0
20 Mar 2025
PlanGenLLMs: A Modern Survey of LLM Planning Capabilities
Hui Wei
Zihao Zhang
Shenghua He
Tian Xia
Shijia Pan
Fei Liu
142
9
0
16 Feb 2025
Commercial LLM Agents Are Already Vulnerable to Simple Yet Dangerous Attacks
Ang Li
Yin Zhou
Vethavikashini Chithrra Raghuram
Tom Goldstein
Micah Goldblum
AAML
164
15
0
12 Feb 2025
DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning
DeepSeek-AI
Daya Guo
Dejian Yang
Haowei Zhang
Junxiao Song
...
Shiyu Wang
S. Yu
Shunfeng Zhou
Shuting Pan
S.S. Li
ReLM, VLM, OffRL, AI4TS, LRM
380
1,970
0
22 Jan 2025
Multi-Agent Collaboration Mechanisms: A Survey of LLMs
Khanh-Tung Tran
Dung Dao
Minh-Duong Nguyen
Quoc-Viet Pham
Barry O'Sullivan
Hoang D. Nguyen
LLMAG
136
56
0
10 Jan 2025
GPT-4o System Card
OpenAI
Aaron Hurst
Adam Lerer
Adam P. Goucher
...
Yuchen He
Yuchen Zhang
Yujia Jin
Yunxing Dai
Yury Malkov
MLLM
204
1,019
0
25 Oct 2024
ToolSandbox: A Stateful, Conversational, Interactive Evaluation Benchmark for LLM Tool Use Capabilities
Jiarui Lu
Thomas Holleis
Yizhe Zhang
Bernhard Aumayer
Feng Nan
...
Shen Ma
Mengyu Li
Guoli Yin
Zirui Wang
Ruoming Pang
LLMAG, ELM
92
39
0
08 Aug 2024
Great, Now Write an Article About That: The Crescendo Multi-Turn LLM Jailbreak Attack
M. Russinovich
Ahmed Salem
Ronen Eldan
106
98
0
02 Apr 2024
Stealing Part of a Production Language Model
Nicholas Carlini
Daniel Paleka
Krishnamurthy Dvijotham
Thomas Steinke
Jonathan Hayase
...
Arthur Conmy
Itay Yona
Eric Wallace
David Rolnick
Florian Tramèr
MLAU, AAML
58
84
0
11 Mar 2024
Teach LLMs to Phish: Stealing Private Information from Language Models
Ashwinee Panda
Christopher A. Choquette-Choo
Zhengming Zhang
Yaoqing Yang
Prateek Mittal
PILM
106
26
0
01 Mar 2024
Large Language Model-based Human-Agent Collaboration for Complex Task Solving
Xueyang Feng
Zhiyuan Chen
Yujia Qin
Yankai Lin
Xu Chen
Zhiyuan Liu
Ji-Rong Wen
LLMAG
90
24
0
20 Feb 2024
Planning, Creation, Usage: Benchmarking LLMs for Comprehensive Tool Utilization in Real-World Complex Scenarios
Shijue Huang
Wanjun Zhong
Jianqiao Lu
Qi Zhu
Jiahui Gao
...
Yasheng Wang
Lifeng Shang
Xin Jiang
Ruifeng Xu
Qun Liu
LLMAG
65
38
0
30 Jan 2024
The What, Why, and How of Context Length Extension Techniques in Large Language Models -- A Detailed Survey
Saurav Pawar
S.M. Towhidul Islam Tonmoy
S. M. M. Zaman
Vinija Jain
Aman Chadha
Amitava Das
59
29
0
15 Jan 2024
TaskLAMA: Probing the Complex Task Understanding of Language Models
Quan Yuan
Mehran Kazemi
Xinyuan Xu
Isaac Noble
Vaiva Imbrasaite
Deepak Ramachandran
LRM
48
12
0
29 Aug 2023
Provable Robust Watermarking for AI-Generated Text
Xuandong Zhao
P. Ananth
Lei Li
Yu-Xiang Wang
WaLM
101
184
0
30 Jun 2023
A Watermark for Large Language Models
John Kirchenbauer
Jonas Geiping
Yuxin Wen
Jonathan Katz
Ian Miers
Tom Goldstein
VLM, WaLM
106
504
0
24 Jan 2023
LLM-Planner: Few-Shot Grounded Planning for Embodied Agents with Large Language Models
Chan Hee Song
Jiaman Wu
Clay Washington
Brian M Sadler
Wei-Lun Chao
Yu-Chuan Su
LLMAG, LM&Ro
135
418
0
08 Dec 2022
Distillation-Resistant Watermarking for Model Protection in NLP
Xuandong Zhao
Lei Li
Yu-Xiang Wang
WaLM
123
20
0
07 Oct 2022