Embedding And Clustering Your Data Can Improve Contrastive Pretraining
Luke Merrick · 26 July 2024 · arXiv 2407.18887

Papers citing "Embedding And Clustering Your Data Can Improve Contrastive Pretraining"

5 papers shown

Stronger Baselines for Retrieval-Augmented Generation with Long-Context Language Models
Alex Laitenberger, Christopher D. Manning, Nelson F. Liu · RALM · 04 Jun 2025

DisastIR: A Comprehensive Information Retrieval Benchmark for Disaster Management
Kai Yin, Xiangjue Dong, Chengkai Liu, Lipai Huang, Yiming Xiao, Zhewei Liu, Ali Mostafavi, James Caverlee · 20 May 2025

Plan-and-Refine: Diverse and Comprehensive Retrieval-Augmented Generation
Alireza Salemi, Chris Samarinas, Hamed Zamani · 10 Apr 2025

Improved Large Language Model Jailbreak Detection via Pretrained Embeddings
Erick Galinkin, Martin Sablotny · 02 Dec 2024

OnlySportsLM: Optimizing Sports-Domain Language Models with SOTA Performance under Billion Parameters
Zexin Chen, Chengxi Li, Xiangyu Xie, Parijat Dube · ALM · 30 Aug 2024