A multimodal speech-production dataset with time-aligned articulography, EEG, audio, and vocal-tract anatomy

Ref. 3123

  

Dataset overview

Dataset title

A multimodal speech-production dataset with time-aligned articulography, EEG, audio, and vocal-tract anatomy

Canonical DOI

Enables citation of the entire dataset, independent of its versions.

https://doi.org/10.48656/6y1s-px92

DOI

Enables citation of a specific version of the dataset.

https://doi.org/10.48656/vc7s-pt02

Language of the dataset description

English

Dataset URL

-

Data availability

-

Dataset description

We present a multimodal speech-production dataset combining simultaneous electromagnetic articulography (EMA; 1,250 Hz), electroencephalography (EEG; 2,048 Hz), and 48 kHz audio from 29 German-speaking adults. External craniofacial anthropometry is available for all participants; an anatomical subset (N = 18) additionally includes acoustic pharyngometry, rhinometry, and 3D head meshes. Speech materials comprise alternating and sequential motion-rate syllable sequences produced at habitual and maximally fast rates with high trial counts; EMA + audio recordings also cover passage reading, sustained vowels, palate tracing, and non-speech oromotor actions. EEG was recorded during the syllable blocks. Cross-modal synchronisation is achieved via hardware triggers, yielding sub-millisecond alignment. All streams are released as raw and minimally processed files, together with a stable event-code map, machine-readable metadata, and open-source Python scripts (with a reproducible container) for data loading, synchronisation, and minimal preprocessing. This resource supports analyses of neural activity time-locked to articulatory onsets, gesture sequencing and rate effects, benchmarking of overt-speech EEG artefact handling, and studies linking vocal-tract anatomy to articulatory dynamics.
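Illustrative note (not part of the dataset's released tooling): the official Python scripts are not reproduced here. The minimal sketch below only illustrates how streams sampled at different rates (EMA 1,250 Hz, EEG 2,048 Hz, audio 48 kHz) can be brought onto a common time axis from shared hardware-trigger events; the function names, example trigger values, and the choice of audio as the reference clock are assumptions made for illustration.

import numpy as np

# Nominal sampling rates of the three simultaneously recorded streams
RATES = {"ema": 1250.0, "eeg": 2048.0, "audio": 48000.0}

def trigger_times(trigger_samples, rate):
    """Convert trigger sample indices of one stream to seconds."""
    return np.asarray(trigger_samples, dtype=float) / rate

def align_to_reference(stream_trigs, ref="audio"):
    """Estimate per-stream time offsets relative to a reference stream.

    stream_trigs maps stream name -> sample indices of the SAME sequence of
    hardware-trigger events as seen by each recorder. Returns offsets (in
    seconds) that shift each stream onto the reference stream's clock.
    """
    ref_t = trigger_times(stream_trigs[ref], RATES[ref])
    offsets = {}
    for name, samples in stream_trigs.items():
        t = trigger_times(samples, RATES[name])
        # Mean difference across shared events gives a constant offset estimate
        offsets[name] = float(np.mean(ref_t - t))
    return offsets

# Hypothetical example: the same three trigger pulses recorded by each device
example = {
    "audio": [48000, 96000, 144000],  # pulses at 1 s, 2 s, 3 s
    "eeg":   [2007, 4055, 6103],      # same pulses on a slightly offset clock
    "ema":   [1225, 2475, 3725],
}
print(align_to_reference(example))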

Notes on the documentation

-

Version number

1.0

Embargo end date

-

Publication date

07.11.2025

Version notes

Version 1.0

Bibliographic citation

Friedrichs, D., Vyshnevetska, V., Lancheros Pompeyo, M. P., Bolt, E., Dellwo, V., & Moran, S. (2025). A multimodal speech-production dataset with time-aligned articulography, EEG, audio, and vocal-tract anatomy (Version 1.0) [Data set]. LaRS - Language Repository of Switzerland. https://doi.org/10.48656/vc7s-pt02

MD5 hash of the DIP

fbb8df388de4771c0cd536db8e2a41f6

Dataset contents

/
metadata.yaml