Nikole Giovannone receives NSF NRT

Congratulations to SLaP Lab Ph.D. student Nikole Giovannone, who has been selected to join the National Science Foundation Research Traineeship (NRT) at UConn. This training program, “Science of Learning and Art of Communication,” draws on subfields of cognitive science and neuroscience. Trainees and their mentors will develop new, team-based, interdisciplinary approaches to learning and will also learn how to communicate effectively with a wide range of audiences. Congrats, Nikole!

Paper accepted at Bilingualism: Language and Cognition

Be on the lookout for a new paper to appear in Bilingualism: Language and Cognition! The title is “Determinants of voice recognition in monolingual and bilingual listeners.” This work was conducted in collaboration with Erin Flanagan, a SLaP Lab Honors student. Reach out to rachel.theodore@uconn.edu if you’d like a preprint, and congratulations, Erin!

Paper accepted at Psychonomic Bulletin & Review

Be on the lookout for a new paper to appear in Psychonomic Bulletin & Review! The title is “Distributional learning for speech reflects cumulative exposure to a talker’s phonetic distributions.” Reach out to rachel.theodore@uconn.edu if you’d like a preprint, and congratulations, Nick!

Dr. Theodore receives NSF award

Dr. Theodore has received a 3-year grant from the Division of Behavioral and Cognitive Sciences of the National Science Foundation titled “Collaborative research: An integrated model of phonetic analysis and lexical access based on individual acoustic cues to features.” The activities will be completed by teams at UConn (with Dr. James Magnuson and Dr. Paul Allopenna) and MIT (Dr. Stefanie Shattuck-Hufnagel and Dr. Elizabeth Choi). The public abstract is shown below.

Abstract: One of the greatest mysteries in the cognitive and neural sciences is how humans achieve robust speech perception given extreme variation in the precise acoustics produced for any given speech sound or word. For example, people can produce different acoustics for the same vowel sound, while in other cases the acoustics for two different vowels may be nearly identical. The acoustic patterns also change depending on the rate at which the sounds are spoken. Listeners may also perceive a sound that was not actually produced due to massive reductions in speech pronunciation (e.g., the “t” and “y” sounds in “don’t you” are often reduced to “doncha”). Most theories assume that listeners recognize words in continuous speech by extracting consonants and vowels in a strictly sequential order. However, previous research has failed to find evidence for robust, invariant information in the acoustic signal that would allow listeners to extract the important information.

This project uses a new tool for the study of language processing, LEXI (for Linguistic-Event EXtraction and Interpretation), to test the hypothesis that individual acoustic cues for consonants and vowels can be extracted from the signal and can be used to determine the speaker’s intended words. When some acoustic cues for speech sounds are modified or missing, LEXI can detect the remaining cues and interpret them as evidence for the intended sounds and words. This research has potentially broad societal benefits, including optimization of machine-human interactions to accommodate atypical speech patterns seen in speech disorders or accented speech. This project supports training of 1-2 doctoral students and 8-10 undergraduate students through hands-on experience in experimental and computational research. All data, including code for computational models, the LEXI system, and speech databases labeled for acoustic cues, will be publicly available through the Open Science Framework; preprints of all publications will be publicly available at PsyArXiv and NSF-PAR.

This interdisciplinary project unites signal analysis, psycholinguistic experimentation, and computational modeling to (1) survey the ways that acoustic cues vary in different contexts, (2) experimentally test how listeners use these cues through distributional learning for speech, and (3) use computational modeling to evaluate competing theories of how listeners recognize spoken words. The work will identify cue patterns in the signal that listeners use to recognize massive reductions in pronunciation and will experimentally test how listeners keep track of this systematic variation. This knowledge will be used to model how listeners “tune in” to the different ways speakers produce speech sounds. By using cues detected by LEXI as input to competing models of word recognition, the work provides an opportunity to examine the fine-grained time course of human speech recognition with large sets of spoken words; this is an important innovation because most cognitive models of speech do not work with speech input directly. Theoretical benefits include a strong test of the cue-based model of word recognition and the development of tools to allow virtually any model of speech recognition to work on real speech input, with practical implications for optimizing automatic speech recognition.
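
For readers curious what “distributional learning” looks like in practice, here is a minimal, purely illustrative sketch in Python: a listener tracks the running mean and variance of a talker’s voice-onset times (VOTs) for two stop categories and classifies a new token by which distribution explains it better. The VOT values, the Gaussian assumption, and the category labels are all hypothetical; this is not the LEXI system or any of the models named above.

    # Toy sketch of distributional learning for speech (illustrative only;
    # not the LEXI system or the project's actual models). A listener keeps
    # a running Gaussian estimate of a talker's cue distributions and
    # categorizes new tokens by likelihood.
    import math

    class CategoryEstimate:
        """Running mean/variance of one category's cue values (Welford's method)."""
        def __init__(self):
            self.n, self.mean, self.m2 = 0, 0.0, 0.0

        def update(self, x):
            # Cumulative exposure: every new token nudges the estimate.
            self.n += 1
            delta = x - self.mean
            self.mean += delta / self.n
            self.m2 += delta * (x - self.mean)

        def loglik(self, x):
            # Gaussian log-likelihood of a token under this category.
            var = self.m2 / (self.n - 1) if self.n > 1 else 1.0
            return -0.5 * (math.log(2 * math.pi * var) + (x - self.mean) ** 2 / var)

    # Voice-onset time (VOT, in ms) is one cue separating /b/ from /p/.
    categories = {"b": CategoryEstimate(), "p": CategoryEstimate()}

    # Hypothetical exposure to one talker's productions.
    for vot in [5, 10, 8, 12, 7]:
        categories["b"].update(vot)
    for vot in [55, 60, 70, 65, 58]:
        categories["p"].update(vot)

    # Categorize a new token by which learned distribution explains it better.
    new_vot = 35
    best = max(categories, key=lambda c: categories[c].loglik(new_vot))
    print(f"A {new_vot} ms VOT token is heard as /{best}/")

Because the estimates update with every token, the classifier’s decisions shift as cumulative exposure to the talker grows; that talker-specific “tuning in” is the phenomenon the experiments described above are designed to probe.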

Paper accepted at Frontiers in Communication

Be on the lookout for a new paper to appear in Frontiers in Communication (Language Sciences)! The title is “Contextual influences on phonetic categorization in school-aged children.” This work was conducted in collaboration with SLaP Lab students Jean Campbell and Heather McSherry. Reach out to rachel.theodore@uconn.edu if you’d like a preprint, and congratulations, Jean and Heather!

Nick Monto receives ASA Stetson scholarship

Congratulations to Nicholas Monto, who has been selected as a 2018-2019 Stetson Scholar of the Acoustical Society of America. Nick’s project uses a distributional learning paradigm to examine the time course of adaptation to talker-specific phonetic variation (Aim 1) and to identify factors that contribute to individual differences in perceptual learning for speech (Aim 2). The data generated from these studies will contribute to improved computational models of dynamic adaptation in speech perception and will help identify potential loci of language impairment. Congratulations, Nick!


Nick Monto completes general exam

Congratulations to Nick Monto, who defended his comprehensive exam* titled “You say tomato, I say tomahto: Computational accounts of talker specificity in human speech perception.” Looking forward to the dissertation prospectus!

*Title inspired by the great Christopher Walken; cue to 56 seconds.