Author: rmt

Paper accepted at Frontiers in Communication

Be on the lookout for a new paper to appear in Frontiers in Communication (Language Sciences). The title is “Individual differences in the use of acoustic-phonetic versus lexical cues for speech perception.” This work was led by Nikole Giovannone, a Ph.D. student in the lab. Data, scripts, and a preprint are available on the Open Science Framework. Congratulations, Nikole!

Paper accepted at Psychonomic Bulletin & Review

Be on the lookout for a new paper to appear in Psychonomic Bulletin & Review. The title is “A second chance for a first impression: Sensitivity to cumulative input statistics for lexically guided perceptual learning.” This work was conducted in collaboration with Drs. Lynne Nygaard and Christina Tzeng at Emory University. Data, scripts, and a preprint are available on the Open Science Framework, here!

Dr. Theodore speaks at #BeOnline2020

Dr. Theodore participated in the “All about auditory research online” panel at the Behavioral Science Online meeting. Her slides are available here; recordings of all talks will be available at the conference website shortly. Many thanks to the conference organizers for hosting such an efficient and informative gathering!

Paper accepted at Language and Linguistics Compass

Be on the lookout for a new paper to appear in Language and Linguistics Compass. The title is “Leveraging interdisciplinary perspectives to optimize auditory training for cochlear implant users.” This review was led by Ph.D. student Julia Drouin; reach out (julia.drouin@uconn.edu) if you’d like a preprint. Congratulations, Julia!

Victoria Zysk submits Honors thesis

Victoria Zysk has successfully completed her Honors thesis titled “The effect of sleep-based memory consolidation on adaptation to noise-vocoded speech.” This work was conducted in collaboration with Dr. Emily Myers and Julia Drouin. We’ll have a preprint on the OSF soon. Congratulations, Victoria!

Tutorial for conducting online speech perception experiments

We’ve been fielding a lot of questions regarding online data collection for speech perception experiments as many of us prepare for disruptions to in-person data collection. We’ve put together a brief tutorial to share some of our successes, challenges, and advice.

A PDF of this page can be downloaded here, but the dynamic page is likely to be more current than the static PDF.

Please don’t hesitate to reach out to me if you have additional questions or if you have feedback/suggestions for making this a better resource for the community in these challenging times.

-rmt

Paper accepted at Cognitive Science

Be on the lookout for a new paper to appear in Cognitive Science titled “EARSHOT: A minimal neural network model of incremental human speech recognition.” This work was led by Drs. James Magnuson, Heejo You, and Jay Rueckl at the University of Connecticut. A preprint of an earlier version of this paper is available here.