An auditory-mediated communication paradigm for evaluating individual needs and motivational states in locked-in patients.

Published: 28 October 2025 | Version 1 | DOI: 10.17632/f2mg2n6cwv.1
Contributors:

Description

The stimulus set was used in the ERP paper "Decoding Motivational States and Craving through Electrical Markers for Neural 'Mind Reading'" by Proverbio AM & Zanetti A (2025). The aim of this study was to identify electrical neuromarkers of 12 different motivational and physiological states (such as visceral cravings, affective and somatosensory states, and secondary needs) in patients with locked-in syndrome (LIS), coma, or a minimally conscious state. Auditory stimuli were designed by combining a human expressive voice with a background sound to evoke a context related to the targeted need. The stimuli covered: primary or visceral needs (hunger, thirst, and sleep), homeostatic or somatosensory sensations (cold, heat, and pain), emotional or affective states (sadness, joy, and fear), and secondary needs (desire for music, movement, and play). Seventeen audio clips were recorded for each micro-category, each replicated twice, once with a male voice and once with a female voice, totaling 408 stimuli (12 micro-categories × 17 clips × 2 voices).

Audacity software was used to combine each vocal track with a background context coherent with its verbal content. Human voices were recorded using a Hompower 202 K38 microphone (SNR = 80 dB). The semantic content, prosodic intonation, and emotional tone of all voices were coherent and appropriately matched. Some of the background sounds were recorded with the same microphone, while others were sourced, for scientific purposes, from the publicly accessible BBC Sound Effects library (https://sound-effects.bbcrewind.co.uk/search).

Research was funded by ATE – Fondo di Ateneo No. 31159-2019-ATE-0064, University of Milano-Bicocca. The research project, entitled "Auditory imagery in BCI mental reconstruction", was preapproved by the Research Assessment Committee of the Department of Psychology (CRIP) for minimal-risk projects, under the aegis of the Ethical Committee of the University of Milano-Bicocca, on February 9th, 2024 (protocol no. RM-2024-775).
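The composition of the stimulus set described above can be sketched as a short enumeration. This is an illustrative reconstruction only: the directory layout and file-naming scheme are assumptions, not the dataset's actual naming convention; only the category structure and the counts (12 micro-categories × 17 clips × 2 voices = 408) come from the description.

```python
from itertools import product

# Macro-categories and their micro-categories, as listed in the description.
CATEGORIES = {
    "primary_needs": ["hunger", "thirst", "sleep"],
    "homeostatic": ["cold", "heat", "pain"],
    "affective": ["sadness", "joy", "fear"],
    "secondary_needs": ["music", "movement", "play"],
}
CLIPS_PER_MICRO_CATEGORY = 17
VOICES = ["male", "female"]

def build_stimulus_list():
    """Enumerate hypothetical stimulus identifiers (naming is illustrative)."""
    stimuli = []
    for macro, micros in CATEGORIES.items():
        for micro, clip, voice in product(
            micros, range(1, CLIPS_PER_MICRO_CATEGORY + 1), VOICES
        ):
            stimuli.append(f"{macro}/{micro}_{voice}_{clip:02d}.wav")
    return stimuli

stimuli = build_stimulus_list()
print(len(stimuli))  # 12 micro-categories x 17 clips x 2 voices = 408
```

Enumerating the set this way also makes it easy to verify that every micro-category contributes exactly 34 stimuli (17 clips × 2 voices) before building an experiment playlist.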

Files are not publicly available

You can contact the author to request the files

Steps to reproduce

Patients are seated in a comfortable, semi-reclined chair and wear headphones to receive auditory stimuli. Throughout the recording session, they are instructed to maintain visual fixation on a central target—a luminous cross presented at the center of the screen, if possible—or to keep their eyes closed, while remaining still and relaxed to minimize movement artifacts.

From a brain–computer interface (BCI) perspective, the experimental paradigm is designed to elicit distinct neural signatures associated with internally generated motivational states. Participants listen to auditory cues and are instructed to vividly re-experience the corresponding motivational condition, engaging in first-person mental imagery as realistically as possible. During the imagery phase, a yellow frame appears on the screen, defining the temporal window used for event-related potential (ERP) alignment and feature extraction. At the end of each trial sequence, a question related to the preceding auditory stimulus is presented, ensuring sustained attention and compliance while providing behavioral markers that can be correlated with neural data.
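The trial sequence above (auditory cue, imagery window marked by the yellow frame, then a probe question) can be sketched as a minimal control loop. This is a sketch under stated assumptions: the callback names, trigger labels, and default durations are placeholders, not the actual presentation software, trigger codes, or timings used in the study.

```python
import time

def run_trial(stimulus, play_audio, show_frame, hide_frame, ask_question,
              mark_event, cue_s=3.0, imagery_s=4.0):
    """One trial: auditory cue -> imagery window (yellow frame) -> probe question.

    The callbacks (play_audio, show_frame, ...) stand in for whatever
    stimulus-presentation and EEG-trigger software is used; they are
    placeholders, not a real API. Durations cue_s/imagery_s are assumed values.
    """
    play_audio(stimulus)            # auditory cue evoking the target state
    time.sleep(cue_s)

    mark_event("imagery_onset")     # trigger used for ERP alignment
    show_frame("yellow")            # yellow frame defines the imagery window
    time.sleep(imagery_s)           # participant re-experiences the state
    hide_frame()
    mark_event("imagery_offset")

    # Compliance/attention probe; the answer is a behavioral marker
    # that can later be correlated with the neural data.
    return ask_question(stimulus)
```

In a real session this loop would be driven by the randomized stimulus list, with the `mark_event` triggers written to the EEG recording so that epochs can be extracted within the yellow-frame window.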

Institutions

University of Milano-Bicocca

Departments

Department of Psychology

Categories

Brain Stimulation, Aided Communication, Hospital Setting, Brain-Computer Interface, AI-Human Interaction, Advanced Communication

Funding

University of Milano-Bicocca

No. 31159-2019-ATE-0064