Inspired by Galvez-Pol et al.'s (2022) results describing our ability to detect heart rate (HR) from faces, we investigate whether this ability can also be observed in the voice and, if so, whether pitch is the most relevant acoustic parameter.
The OSF project is available HERE.
The experiment is implemented using the jsPsych library, and a preprocessing Python script is available.
Each trial presents two sound stimuli, recordings of a participant's voice, each accompanied by a visual indication of a heart rate. The participant then has to choose which of the two pairs seems to be the correct one for them, gives a confidence rating, and moves on to the next trial.
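For illustration, here is a minimal sketch of how such a trial could be assembled with jsPsych 7; the plugin choices, prompts, file names, and labels are assumptions made for this sketch, not the actual implementation (see heart-voice2.html for that).

```js
// Minimal sketch of one trial, assuming jsPsych 7 with the
// audio-keyboard-response, html-button-response, and survey-likert
// plugins loaded. All file names, prompts, and labels are placeholders.
const jsPsych = initJsPsych();

// Play one voice recording while a heart-rate cue is displayed.
const playPair = (audioField) => ({
  type: jsPsychAudioKeyboardResponse,
  stimulus: jsPsych.timelineVariable(audioField), // 'audio_a' or 'audio_b'
  prompt: '<p>[visual heart-rate indication]</p>',
  choices: 'NO_KEYS',                             // no response, just listen
  trial_ends_after_audio: true,
});

// Two-alternative forced choice between the two voice/heart-rate pairs.
const choice = {
  type: jsPsychHtmlButtonResponse,
  stimulus: '<p>Which pair showed the correct heart rate?</p>',
  choices: ['First', 'Second'],
};

// Confidence rating before moving on to the next trial.
const confidence = {
  type: jsPsychSurveyLikert,
  questions: [{
    prompt: 'How confident are you in your choice?',
    labels: ['1', '2', '3', '4', '5'],
  }],
};

// One full trial: listen to both pairs, choose, rate confidence.
const trialSequence = {
  timeline: [playPair('audio_a'), playPair('audio_b'), choice, confidence],
};
```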
Trials are divided into three blocks, corresponding to three conditions: inter-subject, intra-subject, and pitch transform (an intra-subject condition with a manipulation of the pitch).
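A hedged sketch of how these blocks could be laid out on the jsPsych timeline, reusing the trial sequence sketched above; the condition labels and stimulus lists are hypothetical placeholders.

```js
// Sketch of the three-block structure, reusing trialSequence above.
// The stimulus lists are hypothetical placeholders; the real files
// would come from the participants' recorded voices.
const makeBlock = (condition, stimuli) => ({
  timeline: [trialSequence],
  timeline_variables: stimuli,   // supplies 'audio_a' / 'audio_b' per trial
  data: { condition },           // tag every trial with its condition
});

jsPsych.run([
  makeBlock('inter_subject', [{ audio_a: 'other_voice_1.mp3', audio_b: 'other_voice_2.mp3' }]),
  makeBlock('intra_subject', [{ audio_a: 'own_voice_1.mp3', audio_b: 'own_voice_2.mp3' }]),
  makeBlock('pitch_transform', [{ audio_a: 'own_voice_1_shifted.mp3', audio_b: 'own_voice_2_shifted.mp3' }]),
]);
```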
The website of the experiment is available here: https://matthieufra.github.io/Heart_rate_dectection_in_voice/
You can run the experiment in your own browser here: https://matthieufra.github.io/Heart_rate_dectection_in_voice/heart-voice/heart-voice2.html