Big-Thick Data generation via reference and personal context unification

26 August 2024
Fausto Giunchiglia
Xiaoyue Li
Abstract

Smart devices generate vast amounts of big data, mainly in the form of sensor data. While this data allows for the prediction of many aspects of human behaviour (e.g., physical activities, transportation modes), it has a major limitation: it is not thick, that is, it does not carry information about the context within which it was generated. Context, namely what was accomplished by a user, how and why, and in which overall situation, must be explicitly represented for the data to be self-explanatory and meaningful. In this paper, we introduce Big-Thick Data as highly contextualized data encoding, for each and every user, both her subjective personal view of the world and the objective view of an all-observing third party taken as reference. We model big-thick data by enforcing the distinction between personal context and reference context. We show that these two types of context can be unified in many different ways, thus allowing for different types of questions about the users' behaviour and the world around them, and also for multiple different answers to the same question. We validate the model with a case study that integrates the personal big-thick data of 158 university students collected over four weeks with a reference context built from OpenStreetMap data.
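The abstract distinguishes a subjective personal context (what a user did and why, plus sensor readings) from an objective reference context (e.g., places drawn from OpenStreetMap), and notes that the two can be unified in different ways. The sketch below illustrates one such unification, attaching the nearest reference place to a personal record; all class names, fields, and the 50-metre radius are hypothetical choices for illustration, not the paper's actual data model.

```python
# Hypothetical sketch: unifying a personal context record with a
# reference context of places (such as OpenStreetMap POIs).
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class ReferencePlace:      # objective, third-party view (e.g. an OSM POI)
    name: str
    category: str
    lat: float
    lon: float

@dataclass
class PersonalRecord:      # subjective, user-side view
    what: str              # activity reported or inferred for the user
    why: str               # the user's stated purpose
    lat: float             # sensor-derived position
    lon: float

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def unify(record: PersonalRecord, places: list[ReferencePlace], radius_m=50):
    """One possible unification: attach the nearest reference place within
    radius_m to the personal record, yielding a contextualized ('thick') tuple."""
    candidates = [(haversine_m(record.lat, record.lon, p.lat, p.lon), p)
                  for p in places]
    candidates = [c for c in candidates if c[0] <= radius_m]
    place = min(candidates, key=lambda c: c[0])[1] if candidates else None
    return {
        "what": record.what,
        "why": record.why,
        "where_personal": (record.lat, record.lon),
        "where_reference": place.name if place else None,
        "place_category": place.category if place else None,
    }

# Illustrative usage: a student studying near a library POI from the reference data.
poi = ReferencePlace("University Library", "library", 46.0669, 11.1503)
rec = PersonalRecord("studying", "exam preparation", 46.0670, 11.1504)
print(unify(rec, [poi]))
```

Other unification strategies (e.g., matching by place category, by time window, or by the user's own annotations) would answer different questions about the same behaviour, which is the flexibility the abstract points to.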
