|
13 | 13 | "id": "cd4144b5", |
14 | 14 | "metadata": {}, |
15 | 15 | "source": [ |
16 | | - "In the IBL task a visual stimulus (Gabor patch) appears on the left (-35°) or right (+35°) of a screen and the mouse must use a wheel to bring the stimulus to the centre of the screen (0°). If the mouse moves the wheel in the correct direction, the trial is deemed correct and the mouse receives a reward, if however, the mouse moves the wheel in the wrong direction and the stimulus goes off the screen, this is an error trial and the mouse receives a white noise error tone. \n", |
| 16 | + "In the IBL task a visual stimulus (Gabor patch of size 7°<sup>2</sup>) appears on the left (-35°) or right (+35°) of a screen and the mouse must use a wheel to bring the stimulus to the centre of the screen (0°). If the mouse moves the wheel in the correct direction, the trial is deemed correct and the mouse receives a reward. If, however, the mouse moves the stimulus 35° in the wrong direction and the stimulus goes off the screen, this is an error trial and the mouse receives a white noise error tone. The screen is positioned 8 cm in front of the animal and centred relative to the position of the eyes, covering ~102 visual degrees of azimuth. When the mouse moves the stimulus 35° in the wrong direction, the stimulus is therefore visible for 20° and the rest is off the screen.\n", |
17 | 17 | "\n", |
18 | | - "For some analysis it may be useful to know the position of the visual stimulus on the screen during a trial. While there is no direct read out of the location of the stimulus on the screen, as the stimulus is coupled to the wheel, we can infer the position using the wheel position. \n", |
| 18 | + "For some analyses it may be useful to know the position of the visual stimulus on the screen during a trial. While there is no direct read out of the location of the stimulus on the screen, as the stimulus is coupled to the wheel, we can infer the position using the wheel position. \n", |
19 | 19 | "\n", |
20 | | - "Below we walk you through an example of how to compute the continuous screen position for a given trial.\n", |
| 20 | + "Below we walk you through an example of how to compute the continuous stimulus position on the screen for a given trial.\n", |
21 | 21 | "\n", |
22 | | - "For this anaylsis we need access to information about the wheel radius and the wheel gain (visual degrees moved on screen per mm of wheel movement).\n", |
23 | | - "* Wheel radius = 3.1 cm\n", |
24 | | - "* Wheel gain = 4 (deg / mm)" |
| 22 | + "For this analysis we need access to information about the wheel radius (3.1 cm) and the wheel gain (visual degrees moved on screen per mm of wheel movement). The wheel gain changes throughout the training period (see our [behavior paper](https://doi.org/10.7554/eLife.63711) for more\n", |
| 23 | + "information) but for the majority of sessions it is set at 4°/mm." |
25 | 24 | ] |
26 | 25 | }, |
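As a sanity check on these numbers, the conversion from wheel angle to on-screen stimulus displacement can be sketched as follows. This is a minimal example assuming the wheel position is expressed in radians; the constant and function names are illustrative, not from the notebook:

```python
WHEEL_RADIUS_CM = 3.1   # wheel radius from the task description
GAIN_DEG_PER_MM = 4     # visual degrees moved on screen per mm of wheel movement

def wheel_rad_to_stim_deg(wh_pos_rad):
    """Convert a wheel angle in radians to stimulus displacement in visual degrees."""
    wh_pos_mm = wh_pos_rad * WHEEL_RADIUS_CM * 10  # arc length travelled, in mm
    return wh_pos_mm * GAIN_DEG_PER_MM

# With a 4 deg/mm gain, 8.75 mm of wheel travel moves the stimulus the full 35 deg
print(wheel_rad_to_stim_deg(8.75 / 31))
```
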
27 | 26 | { |
|
221 | 220 | "source": [ |
222 | 221 | "# Find the index of the wheel timestamps when the stimulus was presented (stimOn_times)\n", |
223 | 222 | "idx_stim = np.searchsorted(wh_times, trials['stimOn_times'][tr_idx])\n", |
224 | | - "# Normalise the wh_pos to the position at stimOn\n", |
| 223 | + "# Zero the wh_pos to the position at stimOn\n", |
225 | 224 | "wh_pos = wh_pos - wh_pos[idx_stim]" |
226 | 225 | ] |
227 | 226 | }, |
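The `searchsorted` and zeroing step above can be illustrated on a tiny synthetic wheel trace; the timestamps, positions, and stimOn time below are made up for demonstration:

```python
import numpy as np

# Synthetic wheel trace: timestamps (s) and cumulative wheel position
wh_times = np.array([0.0, 0.1, 0.2, 0.3, 0.4])
wh_pos = np.array([1.0, 1.2, 1.5, 1.9, 2.4])

stim_on = 0.25  # hypothetical stimOn time for one trial
idx_stim = np.searchsorted(wh_times, stim_on)  # first sample at or after stimOn
wh_pos = wh_pos - wh_pos[idx_stim]             # zero the trace at stimOn

print(idx_stim)  # 3
print(wh_pos)    # [-0.9 -0.7 -0.4  0.   0.5]
```
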
|
254 | 253 | "outputs": [], |
255 | 254 | "source": [ |
256 | 255 | "GAIN_MM_TO_SC_DEG = 4\n", |
257 | | - "screen_pos = wh_pos * GAIN_MM_TO_SC_DEG" |
| 256 | + "stim_pos = wh_pos * GAIN_MM_TO_SC_DEG" |
258 | 257 | ] |
259 | 258 | }, |
260 | 259 | { |
|
270 | 269 | "id": "e0189229", |
271 | 270 | "metadata": {}, |
272 | 271 | "source": [ |
273 | | - "The screen_pos values that we have above have been computed over the whole trial interval, from trial start to trial end. The stimlus on the screen however is can only move with the wheel between the time at which the stimlus is presented (stimOn_times) and the time at which a choice is made (response_times). After a response is made the visual stimulus then remains in a fixed position until the it disappears from the screen (stimOff_times)" |
| 272 | + "The stim_pos values that we have above have been computed over the whole trial interval, from trial start to trial end. The stimulus on the screen, however, can only move with the wheel between the time at which the stimulus is presented (stimOn_times) and the time at which a choice is made (response_times). After a response is made the visual stimulus remains in a fixed position until it disappears from the screen (stimOff_times)" |
274 | 273 | ] |
275 | 274 | }, |
276 | 275 | { |
|
293 | 292 | "idx_off = np.searchsorted(wh_times, trials['stimOff_times'][tr_idx])\n", |
294 | 293 | "\n", |
295 | 294 | "# Before stimOn no stimulus on screen, so set to nan\n", |
296 | | - "screen_pos[0:idx_stim - 1] = np.nan\n", |
| 295 | + "stim_pos[0:idx_stim] = np.nan\n", |
297 | 296 | "# Stimulus is in a fixed position between response time and stimOff time\n", |
298 | | - "screen_pos[idx_res:idx_off - 1] = screen_pos[idx_res]\n", |
| 297 | + "stim_pos[idx_res:idx_off] = stim_pos[idx_res]\n", |
299 | 298 | "# After stimOff no stimulus on screen, so set to nan\n", |
300 | | - "screen_pos[idx_off:] = np.nan" |
| 299 | + "stim_pos[idx_off:] = np.nan" |
301 | 300 | ] |
302 | 301 | }, |
303 | 302 | { |
304 | 303 | "cell_type": "markdown", |
305 | 304 | "id": "781fe47f", |
306 | 305 | "metadata": {}, |
307 | 306 | "source": [ |
308 | | - "The screen_pos values are given relative to stimOn times but the stimulus appears at either -35° or 35° depending on the stimlus side. We therefore need to apply this offset to our screen position" |
| 307 | + "The stim_pos values are given relative to stimOn times but the stimulus appears at either -35° or 35° depending on the stimulus side. We therefore need to apply this offset to our stimulus position. We also need to account for the convention that increasing wheel position indicates a counter-clockwise movement and therefore a leftward (-ve) movement of the stimulus in visual azimuth." |
309 | 308 | ] |
310 | 309 | }, |
311 | 310 | { |
|
328 | 327 | " # The stimulus appeared on the right\n", |
329 | 328 | " # Values for the screen position will be >0\n", |
330 | 329 | " offset = ONSET_OFFSET # The stimulus starts at +35 and goes to --> 0\n", |
331 | | - " screen_pos = -1 * screen_pos + offset\n", |
| 330 | + " stim_pos = -1 * stim_pos + offset\n", |
332 | 331 | "else:\n", |
333 | 332 | " # The stimulus appeared on the left\n", |
334 | 333 | " # Values for the screen position will be <0\n", |
335 | 334 | " offset = -1 * ONSET_OFFSET # The stimulus starts at -35 and goes to --> 0\n", |
336 | | - " screen_pos = -1 * screen_pos + offset" |
| 335 | + " stim_pos = -1 * stim_pos + offset" |
337 | 336 | ] |
338 | 337 | }, |
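The sign flip and onset offset applied in the two branches above can be collapsed into a single helper. This is only a sketch: the function name and the `side` argument are illustrative, and in the notebook the side would be determined from the `contrastLeft`/`contrastRight` trial fields:

```python
import numpy as np

ONSET_OFFSET = 35  # stimulus onset azimuth in visual degrees

def apply_onset_offset(stim_pos, side):
    """Flip the wheel sign convention (counter-clockwise -> leftward stimulus)
    and shift by the onset azimuth. `side` is 'right' or 'left' (hypothetical)."""
    offset = ONSET_OFFSET if side == 'right' else -ONSET_OFFSET
    return -1 * stim_pos + offset

# A correct rightward trial drives the stimulus from +35 deg towards 0 deg
print(apply_onset_offset(np.array([0.0, 17.5, 35.0]), 'right'))
```
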
339 | 338 | { |
|
378 | 377 | "axs[0].set_ylabel('Wheel displacement (mm)')\n", |
379 | 378 | "\n", |
380 | 379 | "\n", |
381 | | - "# On bottom axis plot the screen position\n", |
382 | | - "axs[1].plot(wh_times, screen_pos, 'k')\n", |
| 380 | + "# On bottom axis plot the stimulus position\n", |
| 381 | + "axs[1].plot(wh_times, stim_pos, 'k')\n", |
383 | 382 | "axs[1].vlines([trials['stimOn_times'][tr_idx], trials['response_times'][tr_idx]],\n", |
384 | 383 | " 0, 1, transform=axs[1].get_xaxis_transform(), colors='k', linestyles='dashed')\n", |
385 | 384 | "axs[1].set_xlim(trials['intervals'][tr_idx])\n", |
|
392 | 391 | "\n", |
393 | 392 | "axs[1].set_ylim([-90, 90])\n", |
394 | 393 | "axs[1].set_xlim(trials['stimOn_times'][tr_idx] - 0.1, trials['response_times'][tr_idx] + 0.1)\n", |
395 | | - "axs[1].set_ylabel('Screen position (°)')\n", |
| 394 | + "axs[1].set_ylabel('Visual azimuth angle (°)')\n", |
396 | 395 | "axs[1].set_xlabel('Time in session (s)')\n", |
397 | 396 | "fig.suptitle(f\"ContrastLeft: {trials['contrastLeft'][tr_idx]}, ContrastRight: {trials['contrastRight'][tr_idx]},\"\n", |
398 | 397 | " f\"FeedbackType {trials['feedbackType'][tr_idx]}\")\n", |
|