OmniQuery: Contextually Augmenting Captured Multimodal Memory to Enable Personal Question Answering

24 February 2025
Jiahao Nick Li
Zhuohao Jerry Zhang
Jiaju Ma
Abstract

People often capture memories through photos, screenshots, and videos. While existing AI-based tools can query this data with natural language, they only support retrieving individual pieces of information, such as certain objects in photos, and struggle to answer more complex queries that involve interpreting interconnected memories, such as sequential events. We conducted a one-month diary study to collect realistic user queries and generated a taxonomy of the contextual information needed to integrate with captured memories. We then introduce OmniQuery, a novel system that answers complex personal memory-related questions requiring the extraction and inference of contextual information. OmniQuery augments individual captured memories by integrating scattered contextual information from multiple interconnected memories. Given a question, OmniQuery retrieves relevant augmented memories and uses a large language model (LLM) to generate answers with references. In human evaluations, OmniQuery achieves an accuracy of 71.5% and outperforms a conventional retrieval-augmented generation (RAG) system, winning or tying in 74.5% of cases.
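
The abstract describes a retrieve-then-generate pipeline: each captured memory is augmented with context drawn from related memories, the memories most relevant to a query are retrieved, and an LLM produces an answer that cites them. Below is a minimal sketch of that general pattern, not the authors' implementation; all names (AugmentedMemory, embed, llm_complete) and the random-projection embedding are placeholder assumptions.

```python
# Toy sketch of the retrieve-then-generate pattern the abstract describes.
# Every identifier here is a hypothetical placeholder, not OmniQuery's API.
from dataclasses import dataclass

import numpy as np


@dataclass
class AugmentedMemory:
    memory_id: str
    caption: str   # e.g. an auto-generated description of a photo
    context: str   # contextual info integrated from related memories


def embed(text: str) -> np.ndarray:
    """Stand-in embedding (seeded random projection); swap in a real
    text-embedding model for meaningful similarity scores."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)


def llm_complete(prompt: str) -> str:
    """Stand-in for a real LLM call; echoes the prompt so the sketch
    runs end to end without external services."""
    return f"(an LLM would answer here, given:)\n{prompt}"


def retrieve(query: str, memories: list[AugmentedMemory],
             k: int = 5) -> list[AugmentedMemory]:
    """Rank memories by cosine similarity between the query and each
    memory's caption plus its integrated context."""
    q = embed(query)
    ranked = sorted(
        memories,
        key=lambda m: float(q @ embed(f"{m.caption} {m.context}")),
        reverse=True,
    )
    return ranked[:k]


def answer(query: str, memories: list[AugmentedMemory]) -> str:
    """Prompt the LLM with the retrieved memories and ask it to cite
    their IDs, mirroring 'answers with references'."""
    hits = retrieve(query, memories)
    evidence = "\n".join(
        f"[{m.memory_id}] {m.caption} | {m.context}" for m in hits
    )
    prompt = (
        "Answer the question using only the memories below, "
        "citing their IDs.\n"
        f"Memories:\n{evidence}\n\nQuestion: {query}\nAnswer:"
    )
    return llm_complete(prompt)


if __name__ == "__main__":
    demo = [
        AugmentedMemory("m1", "photo of a conference badge",
                        "captured at a conference, Monday morning"),
        AugmentedMemory("m2", "screenshot of a flight itinerary",
                        "trip booked for May"),
    ]
    print(answer("Which event did I attend?", demo))
```

The key design point the abstract emphasizes is that retrieval runs over memories already augmented with cross-memory context (the `context` field above), rather than over isolated captures, which is what lets the system handle queries spanning interconnected events.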

@article{li2025_2409.08250,
  title={OmniQuery: Contextually Augmenting Captured Multimodal Memory to Enable Personal Question Answering},
  author={Jiahao Nick Li and Zhuohao Jerry Zhang and Jiaju Ma},
  journal={arXiv preprint arXiv:2409.08250},
  year={2025}
}