Purpose: The aim of the present report was to explore whether vowel metrics, shown to distinguish dysarthric from healthy speech in a companion article (Lansford & Liss, 2014), can predict human perceptual performance.
Method: Vowel metrics derived from vowels embedded in phrases produced by 45 speakers with dysarthria were compared with orthographic transcriptions of these phrases collected from 120 healthy listeners. First, correlation and stepwise multiple regressions were conducted to identify acoustic metrics that had predictive value for perceptual measures. Next, discriminant function analysis misclassifications were compared with listeners' misperceptions to examine more directly the perceptual consequences of degraded vowel acoustics.
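The first analytic step pairs each acoustic metric with a perceptual measure and tests their correlation. A minimal sketch of that step is below; the metric name, variable names, and all data values are invented for illustration and are not drawn from the study:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-speaker values: a vowel-space dispersion metric
# (acoustic) and mean transcription accuracy (perceptual).
dispersion = [0.21, 0.34, 0.28, 0.45, 0.39, 0.52]
accuracy   = [0.48, 0.62, 0.55, 0.80, 0.71, 0.86]
r = pearson_r(dispersion, accuracy)
```

In the study itself, metrics surviving this screen would then enter a stepwise multiple regression; the sketch covers only the initial correlational screen.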
Results: Several moderate correlations were found between acoustic metrics and perceptual measures, with predictive models accounting for 18%–75% of the variance in measures of intelligibility and vowel accuracy. Results of the second analysis showed that listeners better identified acoustically distinctive vowel tokens. In addition, the agreement between misclassified and misperceived vowel tokens suggests that degraded acoustic profiles shape the resulting percept in somewhat specific ways.
Conclusion: Results provide evidence that degraded vowel acoustics affect human perceptual performance, even in the presence of extravowel variables that naturally influence phrase perception.