---
type: "project"
date: "2025-06-14"
title: "Understanding Learning Trajectories in VRIT: Dynamic Behavioral and Neural Signatures of Inference"
names: ["Tzu-Yun Kung"]
github_repo: "https://github.com/TYK0903/Tzu-Yun_Kung_project"
tags: [inference, learning trajectories, python, fmri]
image: "cover.png"
summary: "This project aims to examine how the brain supports two types of inference—active and passive—during the process of posterior belief integration. I applied trial-by-trial analysis to track participants’ learning trajectories over time."
---

## Project definition

### Background

Inference is a fundamental process through which the human brain updates its beliefs about environmental causes and future states. Belief updating can occur via passive inference, in which prior beliefs and outcomes are only loosely connected and new beliefs emerge primarily from observing associations in the environment.
However, inference is not always passive. It can also take an active form, involving hypothesis-driven actions that manipulate contextual elements to test the causal links between beliefs and outcomes.
To explore this distinction, our study examines how the brain supports these two types of inference, active and passive, during posterior belief integration. To this end, our lab designed the Visual Rule Inference Task (VRIT), a paradigm developed specifically to probe the neural mechanisms underlying these distinct inferential processes.
The goal of this project is to uncover the inference processes that underlie participants’ behavior as they perform the VRIT. I analyzed both neural and behavioral data, focusing on how participants engage in inference under the two task conditions, active versus passive. In particular, I applied trial-by-trial analysis to track participants’ learning trajectories over time.
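The trial-by-trial approach described above can be sketched in Python. This is a minimal illustration under assumed data, not the actual VRIT analysis: the column names (`trial`, `condition`, `correct`) and the rolling-window size are hypothetical placeholders for whatever the real behavioral logs contain.

```python
import numpy as np
import pandas as pd

def learning_trajectory(df: pd.DataFrame, window: int = 5) -> pd.DataFrame:
    """Rolling-window accuracy per condition across trials.

    Assumes columns 'trial' (int), 'condition' ('active'/'passive'),
    and 'correct' (0/1). These names are illustrative, not the real
    VRIT data format.
    """
    df = df.sort_values("trial")
    return (
        df.groupby("condition")["correct"]
          .rolling(window, min_periods=1)
          .mean()
          .reset_index(name="accuracy")
    )

# Toy data: 40 trials alternating between the two conditions.
rng = np.random.default_rng(0)
trials = pd.DataFrame({
    "trial": np.arange(40),
    "condition": ["active", "passive"] * 20,
    "correct": rng.integers(0, 2, size=40),
})
curves = learning_trajectory(trials)
```

Plotting `accuracy` against trial index per condition then gives one learning curve for active and one for passive inference, which is the comparison the project is after.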

<iframe width="560" height="315" src="https://www.youtube.com/embed/PTYs_JFKsHI" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

### Tools

The project relied on the following technologies:
 * Python, to organize the data and to analyze and visualize the behavioral and brain data.
 * SPM, to analyze the brain imaging data.
 * GitHub, to add the project to the website through pull requests.

### Data

The dataset used in this project comes from the [Brain and Mind Lab](https://gibms.mc.ntu.edu.tw/bmlab/), Taiwan.

### Deliverables
 - As the study is still unpublished, this repository contains only the Python scripts I wrote and shared as part of my learning experience during the Brainhack School.

## Results

### Progress overview
 1. Used Python to organize the data.
 2. Used Python to analyze and visualize the behavioral data.
 3. Wrote scripts to create batches for brain image preprocessing.
 4. Organized onset times and durations with Python.
 5. Created contrast files.
 6. Used Python to troubleshoot issues by inspecting beta values.
 7. Created brain image visualizations using Python.
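For step 4 above, a common way to hand timing information from Python to SPM is a "multiple conditions" `.mat` file holding `names`, `onsets`, and `durations` cell arrays. The sketch below uses placeholder condition names and timings, not the real VRIT events, and assumes two conditions with a fixed event duration:

```python
import numpy as np
from scipy.io import savemat

# Placeholder onsets/durations in seconds; real values come from task logs.
conditions = {
    "active":  {"onsets": [10.0, 45.0, 80.0], "duration": 4.0},
    "passive": {"onsets": [25.0, 60.0, 95.0], "duration": 4.0},
}

# SPM expects cell arrays names{i}, onsets{i}, durations{i};
# dtype=object makes scipy write MATLAB cell arrays.
names = np.empty((1, len(conditions)), dtype=object)
onsets = np.empty((1, len(conditions)), dtype=object)
durations = np.empty((1, len(conditions)), dtype=object)
for i, (name, c) in enumerate(conditions.items()):
    names[0, i] = name
    onsets[0, i] = np.array(c["onsets"])
    durations[0, i] = np.full(len(c["onsets"]), c["duration"])

savemat("sub-01_run-01_conditions.mat",
        {"names": names, "onsets": onsets, "durations": durations})
```

The resulting file can be selected in the SPM first-level specification batch under "Multiple conditions", which is what the preprocessing and contrast steps then build on.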

### Tools I learned during this project
 * Python, to organize the data and to analyze and visualize the behavioral and brain data.
 * SPM, to analyze the brain imaging data.
 * GitHub, to add the project to the website through pull requests.


## Conclusion and acknowledgement

From the results, we observed notable differences between the two types of inference, as well as variations in their learning trajectories. These findings suggest the need for more in-depth analyses in the future.