We conducted a feasibility analysis to assess the quality of data that can be collected ambiently during routine clinical conversations. We used inexpensive, consumer-grade hardware to record unstructured dialogue, and open-source software tools to quantify and model facial, voice (acoustic and language), and movement features. Using an external validation set, we performed proof-of-concept predictive analyses and showed that clinically relevant measures can be produced without a restrictive protocol.
Keywords: acoustic; conversation; digital phenotype; facial feature; voice.
Copyright: © 2022 The Author(s).