Giving feedback after total success rate in a timeline #2834
Unanswered
georginatorok asked this question in Q&A
Replies: 1 comment · 2 replies
-
Hi @georginatorok,

The issue you are running into is that trial parameters are evaluated once, when the experiment is first constructed, not when the trial actually runs. The way to solve this is with dynamic parameters. There's also a step in the main tutorial that shows how to aggregate this kind of data and show it to the user. Feel free to follow up here if those resources don't solve the problem!
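For concreteness, here is a minimal sketch of what the dynamic-parameters fix could look like (jsPsych 7 syntax; the variable name `accuracy_exgame_intro_tap` is carried over from the question, while the trial names, markup, and the `last(6)` aggregation are illustrative assumptions):

```js
// Sketch of the dynamic-parameters fix. Because `stimulus` is a function,
// jsPsych evaluates it when the feedback trial starts -- after
// on_timeline_finish has stored the score -- not once when the script loads.
var feedback_exgame_intro_tap = {
  type: jsPsychHtmlKeyboardResponse,
  stimulus: function () {
    return '<p>You hit ' + accuracy_exgame_intro_tap + ' of 6 targets. ' +
           'Press any key to continue.</p>';
  }
};

// Equivalently, skip the intermediate variable and aggregate from the data
// store inside the function, similar to what the main tutorial demonstrates:
var feedback_from_data = {
  type: jsPsychHtmlKeyboardResponse,
  stimulus: function () {
    var hits = jsPsych.data.get().last(6).filter({ correct: true }).count();
    return '<p>You hit ' + hits + ' of 6 targets.</p>';
  }
};
```

The second variant avoids sharing a variable between timeline nodes entirely, which keeps the feedback trial self-contained.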
-
Hi all,
I am fairly new to JavaScript and am having some difficulty working out how to provide performance-based feedback after a timeline of multiple trials.
In my experiment, I am using the Serial Reaction Time Mouse plugin to give participants a sequence they need to follow accurately. This is implemented in the "exgame_intro_tap" timeline (see code below).
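A minimal sketch of a timeline like the one described (jsPsych 7 syntax; the grid layout and target sequence below are hypothetical stand-ins, as only the timeline name comes from the post):

```js
// Six serial-reaction-time-mouse trials driven by timeline variables.
// The grid and target positions here are made up for illustration.
var exgame_intro_tap = {
  timeline: [
    {
      type: jsPsychSerialReactionTimeMouse,
      grid: [[1, 1, 1, 1]],                        // one row of four squares
      target: jsPsych.timelineVariable('target'),  // [row, column] to click
      allow_nontarget_responses: true              // so misses are recorded as incorrect
    }
  ],
  timeline_variables: [
    { target: [0, 0] }, { target: [0, 2] }, { target: [0, 1] },
    { target: [0, 3] }, { target: [0, 2] }, { target: [0, 0] }
  ]
};
```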
I want to give feedback on the total number of hits (correct trials) in the sequence right after the SRT sequence is done. I managed to calculate the number of hits with an "on_timeline_finish" function after all 6 trials in the sequence are done, and assigned it to a variable called "accuracy_exgame_intro_tap". console.log() can successfully display this variable's value while the task is running.

However, I am having trouble accessing and using that value in the next timeline, where I want to use the number of hits to give feedback on the participant's success rate. I tried to add it to the data with jsPsych.data.addProperties() in the on_timeline_finish function, but it doesn't work. I get the following error:
Uncaught ReferenceError: accuracy is not defined
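Without the full script it is impossible to say exactly where `accuracy` is referenced, but a hypothetical reconstruction along these lines matches the symptoms (jsPsych 7 syntax, assuming an instance created with initJsPsych(); only the names exgame_intro_tap and accuracy_exgame_intro_tap come from the post):

```js
var accuracy_exgame_intro_tap; // declared at the top level

var exgame_intro_tap = {
  timeline: [ /* the six SRT mouse trials sketched above */ ],
  on_timeline_finish: function () {
    // Count the hits among the six trials that just ran.
    accuracy_exgame_intro_tap =
      jsPsych.data.get().last(6).filter({ correct: true }).count();
    console.log(accuracy_exgame_intro_tap); // prints the right value here

    // Adds an `accuracy` column to every row of the recorded data...
    jsPsych.data.addProperties({ accuracy: accuracy_exgame_intro_tap });
  }
};

var feedback = {
  type: jsPsychHtmlKeyboardResponse,
  stimulus: function () {
    // ...but addProperties does not create a JavaScript variable named
    // `accuracy`, so referencing it here throws:
    // Uncaught ReferenceError: accuracy is not defined
    return '<p>You hit ' + accuracy + ' of 6 targets.</p>';
  }
};
```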
Could you help me with how to save the success rate after a timeline is finished, so that it can be used in a subsequent feedback trial? I'm probably not saving it in the right place or in the right way. Is this because .addProperties() only takes effect at the very end of the experiment, so the variable is not yet created at the time the feedback trial runs? Or is it a globally vs. locally scoped variable issue? Where and how do I need to save the calculated accuracy value for it to be accessible to the feedback trial?
Best,
Gina