We present a multimodal speech-production dataset combining simultaneous electromagnetic articulography (EMA; 1,250 Hz), electroencephalography (EEG; 2,048 Hz), and 48 kHz audio from 29 German-speaking adults. External craniofacial anthropometry is available for all participants; an anatomical subset (N = 18) additionally includes acoustic pharyngometry, rhinometry, and 3D head meshes. Speech materials comprise alternating and sequential motion-rate syllable sequences produced at habitual and maximally fast rates with high trial counts; EMA and audio also cover passage reading, sustained vowels, palate tracing, and non-speech oromotor actions. EEG was recorded during the syllable-sequence blocks. Cross-modal synchronisation is achieved via hardware triggers, yielding sub-millisecond alignment. All streams are released as raw and minimally processed files, together with a stable event-code map, machine-readable metadata, and open-source Python scripts (with a reproducible container) for data loading, synchronisation, and minimal preprocessing. This resource supports analyses of neural activity time-locked to articulatory onsets, studies of gesture sequencing and rate effects, benchmarking of overt-speech EEG artefact handling, and investigations linking vocal-tract anatomy to articulatory dynamics.
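
Because the abstract points to released Python tooling for loading and synchronising the streams, the following is a minimal sketch of the alignment arithmetic such tooling implies: mapping an articulatory-onset sample from the EMA stream into the EEG stream via a shared hardware trigger, then epoching the EEG around that onset. Only the sampling rates are taken from the text; the variable names, trigger positions, and data are hypothetical and do not represent the dataset's actual loader API.

```python
"""Minimal alignment sketch (not the released loader).

Assumes each stream records the sample index at which a shared hardware
trigger arrived; that common event serves as the time origin for
cross-modal conversion. Sampling rates are as reported in the abstract."""
import numpy as np

FS_EMA, FS_EEG, FS_AUDIO = 1250, 2048, 48000  # Hz, from the abstract

# Hypothetical trigger positions (sample indices) in the EMA and EEG streams.
trig_ema_sample = 12_500
trig_eeg_sample = 20_480


def ema_to_eeg_sample(ema_sample: int) -> int:
    """Map an EMA sample index to the corresponding EEG sample index,
    using the shared hardware trigger as the common time origin."""
    t_sec = (ema_sample - trig_ema_sample) / FS_EMA  # seconds relative to trigger
    return trig_eeg_sample + int(round(t_sec * FS_EEG))


# Hypothetical data: EEG as (channels, samples) and an articulatory onset
# detected in the EMA stream (sample index).
eeg = np.random.randn(64, 30 * FS_EEG)
articulatory_onset_ema = 15_000

# Epoch the EEG from -200 ms to +500 ms around the articulatory onset.
onset_eeg = ema_to_eeg_sample(articulatory_onset_ema)
pre, post = int(0.2 * FS_EEG), int(0.5 * FS_EEG)
epoch = eeg[:, onset_eeg - pre : onset_eeg + post]
print(epoch.shape)  # (64, pre + post) samples time-locked to the onset
```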