Beyond FACS: Data-driven Facial Expression Dictionaries, with Application to Predicting Autism

30 May 2025
Evangelos Sariyanidi
Lisa Yankowitz
Robert T. Schultz
John D. Herrington
Birkan Tunc
Jeffrey Cohn
Abstract

The Facial Action Coding System (FACS) has been used by numerous studies to investigate the links between facial behavior and mental health. The laborious and costly process of FACS coding has motivated the development of machine learning frameworks for Action Unit (AU) detection. Despite intense efforts spanning three decades, the detection accuracy for many AUs is considered to be below the threshold needed for behavioral research. Moreover, many AUs are excluded altogether, making it impossible to fulfill the ultimate goal of FACS: the representation of any facial expression in its entirety. This paper considers an alternative approach. Instead of creating automated tools that mimic FACS experts, we propose a new coding system that mimics the key properties of FACS. Specifically, we construct a data-driven coding system called the Facial Basis, which contains units that correspond to localized and interpretable 3D facial movements, and which overcomes three structural limitations of automated FACS coding. First, the proposed method is completely unsupervised, bypassing costly, laborious and variable manual annotation. Second, the Facial Basis reconstructs all observable movement, rather than relying on a limited repertoire of recognizable movements (as in automated FACS). Finally, the Facial Basis units are additive, whereas AUs may fail detection when they appear in a non-additive combination. The proposed method outperforms the most frequently used AU detector in predicting autism diagnosis from in-person and remote conversations, highlighting the importance of encoding facial behavior comprehensively. To our knowledge, the Facial Basis is the first alternative to FACS for deconstructing facial expressions in videos into localized movements. We provide an open source implementation of the method at this http URL.
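The paper does not detail its algorithm in this abstract, but the core idea — reconstructing each frame's facial movement as a sparse, additive combination of localized, unsupervised units — can be illustrated with generic sparse dictionary learning. The sketch below uses synthetic displacement data and scikit-learn's `DictionaryLearning`; the block structure, sizes, and parameters are illustrative assumptions, not the authors' actual Facial Basis construction.

```python
# Hedged sketch: generic sparse dictionary learning on synthetic data,
# illustrating additive, localized movement units (NOT the paper's method).
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)

# Synthetic stand-in for per-frame 3D facial displacement fields:
# 200 frames, each a flattened vector of 60 displacement coordinates.
n_frames, n_features, n_units = 200, 60, 5

# Ground-truth "localized" units: each is nonzero only on a small block
# of coordinates, mimicking a spatially localized facial movement.
true_units = np.zeros((n_units, n_features))
for k in range(n_units):
    true_units[k, k * 12:(k + 1) * 12] = rng.normal(size=12)

# Sparse, nonnegative activations: each frame uses only a few units,
# combined additively, plus small observation noise.
codes = rng.exponential(size=(n_frames, n_units)) * (rng.random((n_frames, n_units)) < 0.3)
X = codes @ true_units + 0.01 * rng.normal(size=(n_frames, n_features))

# Unsupervised dictionary learning: no labels or manual annotation,
# analogous in spirit to learning Facial Basis units directly from video.
dl = DictionaryLearning(n_components=n_units, alpha=0.1, random_state=0)
sparse_codes = dl.fit_transform(X)

# Each frame is reconstructed as an additive combination of learned units.
reconstruction = sparse_codes @ dl.components_
error = np.linalg.norm(X - reconstruction) / np.linalg.norm(X)
print(f"relative reconstruction error: {error:.3f}")
```

Because the decomposition is purely additive, co-occurring movements simply sum in the code vector, sidestepping the non-additive AU combination failure mode the abstract describes.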

@article{sariyanidi2025_2505.24679,
  title={Beyond FACS: Data-driven Facial Expression Dictionaries, with Application to Predicting Autism},
  author={Evangelos Sariyanidi and Lisa Yankowitz and Robert T. Schultz and John D. Herrington and Birkan Tunc and Jeffrey Cohn},
  journal={arXiv preprint arXiv:2505.24679},
  year={2025}
}