Wearable devices with embedded sensors can provide personalized healthcare and wellness benefits through digital phenotyping and adaptive interventions. However, the collection, storage, and transmission of biometric data from these devices, even as processed features rather than raw signals, pose significant privacy concerns. This quantitative, data-driven study examines the privacy risks associated with wearable-based digital phenotyping practices, focusing on user reidentification (ReID): recovering participants' identities from deidentified digital phenotyping datasets. We propose a machine-learning-based computational pipeline that evaluates and quantifies model outcomes under various configurations, such as modality inclusion, window length, and feature type and format, to investigate the factors that influence ReID risk and its trade-offs with predictive utility. Applied to features extracted from three wearable sensors, the pipeline achieves up to 68.43% ReID accuracy for a sample of N=45 socially anxious participants using only descriptive features computed over 10-second observation windows. We further explore the trade-offs between privacy risks and predictive benefits by adjusting pipeline settings (e.g., how extracted features are processed). Our findings highlight the importance of privacy protection in digital phenotyping and suggest directions for future work.
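To make the evaluation concrete, the following is a minimal, hypothetical sketch of the kind of windowed-feature ReID assessment the abstract describes: a classifier is trained to predict participant IDs from per-window sensor features, and its held-out accuracy serves as a proxy for reidentification risk. The synthetic data, the random-forest model, and all names and parameters here are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch: ReID risk quantified as the accuracy of a classifier
# predicting participant ID from per-window sensor features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_participants, windows_per_participant, n_features = 45, 200, 12

# Synthetic per-window descriptive features (e.g., mean/std over a 10-second
# window), with a small participant-specific offset so some identity signal
# exists. Real data would come from the wearable sensor streams.
X = np.vstack([
    rng.normal(loc=rng.normal(size=n_features) * 0.5,
               scale=1.0,
               size=(windows_per_participant, n_features))
    for _ in range(n_participants)
])
y = np.repeat(np.arange(n_participants), windows_per_participant)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Assumed model choice; the paper's actual classifier may differ.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Higher held-out accuracy implies higher ReID risk for this configuration.
reid_accuracy = accuracy_score(y_test, clf.predict(X_test))
print(f"ReID accuracy (risk proxy): {reid_accuracy:.2%}")
```

Under this framing, sweeping configurations such as window length, modality subsets, or feature format would correspond to re-running the same evaluation per configuration and comparing the resulting accuracies against each configuration's predictive benefit.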