A multimodal speech-production dataset with time-aligned articulography, EEG, audio, and vocal-tract anatomy

Ref. 3123

  

Dataset overview

Dataset title

A multimodal speech-production dataset with time-aligned articulography, EEG, audio, and vocal-tract anatomy

Canonical DOI

Enables citation of the entire dataset, independent of its versions.

https://doi.org/10.48656/6y1s-px92

DOI

Enables citation of a specific version of the dataset.

https://doi.org/10.48656/vc7s-pt02

Language of the dataset description

English

Dataset URL

-

Data availability

-

Dataset description

We present a multimodal speech-production dataset combining simultaneous electromagnetic articulography (EMA; 1,250 Hz), electroencephalography (EEG; 2,048 Hz), and 48 kHz audio from 29 German-speaking adults. All participants have external craniofacial anthropometry; an anatomical subset (N = 18) adds acoustic pharyngometry, rhinometry, and 3D head meshes. Speech materials include alternating and sequential motion-rate syllable sequences at habitual and maximally fast rates with high trial counts; EMA + audio also cover passage reading, sustained vowels, palate tracing, and non-speech oromotor actions. EEG was recorded for syllable blocks. Cross-modal synchronisation is achieved via hardware triggers, yielding sub-millisecond alignment. All streams are released as raw and minimally processed files with a stable event-code map, machine-readable metadata, and open-source Python scripts (with a reproducible container) for data loading, synchronisation, and minimal preprocessing. This resource supports analyses of neural activity time-locked to articulatory onsets, gesture sequencing and rate effects, benchmarking of overt-speech EEG artefact handling, and studies linking vocal-tract anatomy to articulatory dynamics.
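The trigger-based synchronisation described above can be illustrated with a minimal sketch: each stream records the same hardware trigger pulse at a stream-specific sample index, so converting sample indices to seconds relative to that trigger puts all three streams on a common clock. The sample rates match the dataset (EMA 1,250 Hz, EEG 2,048 Hz, audio 48 kHz), but the function name and trigger indices below are hypothetical and do not reflect the released scripts or event-code map.

```python
# Illustrative sketch of trigger-based cross-modal alignment.
# Sample rates follow the dataset; trigger indices are hypothetical.

EMA_FS = 1250      # electromagnetic articulography, Hz
EEG_FS = 2048      # electroencephalography, Hz
AUDIO_FS = 48000   # audio, Hz

def sample_to_seconds(sample_index: int, fs: int, trigger_sample: int) -> float:
    """Map a sample index to seconds on a common clock whose zero is the
    shared hardware trigger pulse as seen in this particular stream."""
    return (sample_index - trigger_sample) / fs

# Suppose the same trigger pulse was captured at these sample indices:
trig_ema, trig_eeg, trig_audio = 2500, 4096, 96000   # all mark t = 0 s

# An event occurring 1 s after the trigger lands at a different
# sample index in each stream, yet maps to the same common time:
assert sample_to_seconds(2500 + 1250, EMA_FS, trig_ema) == 1.0
assert sample_to_seconds(4096 + 2048, EEG_FS, trig_eeg) == 1.0
assert sample_to_seconds(96000 + 48000, AUDIO_FS, trig_audio) == 1.0
```

In practice the released Python scripts and event-code map should be used for loading and synchronisation; this sketch only conveys why a shared trigger yields sub-millisecond alignment across sampling rates.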

Notes on the documentation

-

Version number

1.2

Embargo end date

-

Publication date

12.03.2026

Version notes

Minor updates to the data documentation, code, and README file.

Bibliographic citation

Friedrichs, D., Vyshnevetska, V., Lancheros Pompeyo, M. P., Bolt, E., Dellwo, V., & Moran, S. (2026). A multimodal speech-production dataset with time-aligned articulography, EEG, audio, and vocal-tract anatomy (Version 1.2) [Data set]. LaRS - Language Repository of Switzerland. https://doi.org/10.48656/vc7s-pt02

MD5 hash of the DIP

abd9b70ecf9c137e70c7ebe5508e8a97

Dataset contents

/
README.md
metadata.yaml