Commit e6dd933 (parent: bc03367)

upload a page for my project (#372)

* upload a page for my project
* Delete content/en/project/ShahirAdha_project/project_code.ipynb
* Update content/en/project/ShahirAdha_project/index.md

Co-authored-by: Lune Bellec <lune.bellec@umontreal.ca>

3 files changed: +83 −0 lines changed

---
type: "project" # DON'T TOUCH THIS ! :)
date: "2025-06-12" # Date you first upload your project.

# Title of your project (we like creative titles)
title: "Decoding Perceived Emotion from BOLD data using Machine Learning"

# List the names of the collaborators within the [ ]. If alone, simply put your name within [].
names: [Muhammad Shahir Adha]

# Your project GitHub repository URL
github_repo: https://github.com/MShahirAdha/Adha_project/tree/main

# If you are working on a project that has a website, indicate the full URL including "https://" below, or leave it empty.
website:

# List ~4 keywords that best describe your project within []. Note that the project summary also involves a number of keywords. Those are listed on top of the [GitHub repository](https://github.com/PSY6983-2021/project_template); click `manage topics`.
# Please use only lowercase letters.
tags: [emotion, machine learning, fmri, perception]

# Summarize your project in < ~75 words. This description will appear at the top of your page and on the list page with other projects.
summary: "This project applies machine learning to decode perceived emotions from fMRI data using ROI-based features. Data from the ds003548 OpenNeuro dataset are analyzed, with task labels extracted from events files. ROI time series are extracted using the MIST 64-ROI atlas, and mean signals during emotion blocks are classified using a linear SVM. The goal is to distinguish between six conditions (happy, sad, angry, neutral, blank, scrambled), demonstrating key concepts and challenges in neuroimaging-based classification."

# If you want to add a cover image (listpage and image in the right), add it to your directory and indicate the name
# below with the extension.
image: "confusion_matrix_test.png"
---
<!-- This is an html comment and this won't appear in the rendered page. You are now editing the "content" area, the core of your description. Everything that you can do in markdown is allowed below. We added a couple of comments to guide you through documenting your progress. -->

## Project definition

### Background
Understanding how the brain processes emotions is a central challenge in cognitive neuroscience. Functional Magnetic Resonance Imaging (fMRI) allows researchers to non-invasively measure brain activity during emotional tasks, but traditional univariate methods often miss the distributed nature of neural representations.

This project uses multivariate pattern analysis (MVPA) to decode emotional states from fMRI data in the OpenNeuro dataset ds003548. Participants viewed emotional and non-emotional face stimuli across several runs. Smoothed, denoised BOLD images were used to extract region-wise average signals based on the MIST 64-region functional atlas. A linear Support Vector Machine (SVM) was trained to classify six conditions: happy, sad, angry, neutral, blank, and scrambled. This approach demonstrates how machine learning can reveal distributed emotion-related patterns in the brain.
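As a rough sketch of this classification setup, the whole pipeline can be mimicked with synthetic data standing in for the real ROI signals. Only the 64-ROI feature count, the six conditions, and the linear SVM come from the project description; the block count and random features below are illustrative:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Stand-in for the mean BOLD signal per MIST ROI during each emotion block:
# 120 blocks x 64 ROIs (synthetic data, for illustration only).
X = rng.standard_normal((120, 64))
conditions = ["happy", "sad", "angry", "neutral", "blank", "scrambled"]
y = rng.choice(conditions, size=120)

# Standardize features, then fit a linear SVM with 5-fold cross-validation.
clf = make_pipeline(StandardScaler(), LinearSVC(max_iter=5000))
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())  # near chance (~1/6) on random data, as expected
```

On the real data, `X` would instead hold the atlas-averaged BOLD signals per emotion block, with one row per block and one label per row.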
### Tools

The project relies on the following technologies:

* Python: nibabel, pandas, numpy, scikit-learn, nilearn, matplotlib
* GitHub, through pull requests, for adding the project to the website.
### Data

This project uses publicly available fMRI data from the OpenNeuro dataset ds003548, titled "Emotion Category and Face Perception Task Optimized for Multivariate Pattern Analysis". The dataset includes BOLD fMRI scans from participants viewing emotional and non-emotional face stimuli across multiple runs.

Preprocessing was performed using fMRIPrep, and the analysis uses smoothed, denoised BOLD images in MNI space. Task condition labels were extracted from events.tsv files, and brain activity was summarized using the MIST 64-region functional atlas.
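The label-extraction step can be sketched with a toy BIDS-style events table. The rows and the TR value below are made up for illustration, not taken from ds003548; the `onset`/`duration`/`trial_type` columns are the standard BIDS events.tsv fields:

```python
import io
import pandas as pd

# Minimal example of pulling condition labels from a BIDS events.tsv file;
# the rows below are illustrative, not actual ds003548 events.
events_tsv = io.StringIO(
    "onset\tduration\ttrial_type\n"
    "10.0\t12.0\thappy\n"
    "30.0\t12.0\tscrambled\n"
    "50.0\t12.0\tsad\n"
)
events = pd.read_csv(events_tsv, sep="\t")

# One label per block, plus the scan index where each block starts
# (TR assumed to be 2.0 s here, purely for the example).
TR = 2.0
events["onset_scan"] = (events["onset"] / TR).astype(int)
labels = events["trial_type"].tolist()
print(labels)  # ['happy', 'scrambled', 'sad']
```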
### Deliverables

At the end of this project, we will have:

- The current markdown document, completed and revised.
- Model test accuracy results and a confusion matrix.
- The Jupyter notebook code.
## Results

### Progress overview

This project was developed as part of BrainHack School 2025. I adapted material from the tutorials to perform emotion classification using fMRI data from the OpenNeuro ds003548 dataset. The pipeline includes data loading, ROI-based feature extraction, and machine learning classification using a linear SVM. The process was straightforward, and community feedback may help refine future iterations.
### Tools I learned during this project

* **fMRI analysis**: I learned how to work with BIDS-formatted fMRI data, use nibabel and nilearn to load and process brain images, and apply ROI-based feature extraction using the MIST 64 atlas.
* **Machine learning for neuroimaging**: I applied scikit-learn to perform classification using a linear SVM, with cross-validation strategies suited for subject-level generalization.
* **GitHub & collaboration**: I practiced using GitHub for project version control and learned how to structure a project for reproducibility and collaboration.
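For the subject-level generalization mentioned above, one common pattern (a sketch with synthetic data; the real notebook may split differently) is scikit-learn's `GroupKFold`, which keeps each subject's blocks entirely inside either the training or the test fold:

```python
import numpy as np
from sklearn.model_selection import GroupKFold
from sklearn.svm import LinearSVC

rng = np.random.default_rng(42)
X = rng.standard_normal((60, 64))        # 60 blocks x 64 ROI features (synthetic)
y = rng.integers(0, 6, size=60)          # 6 condition codes
subjects = np.repeat(np.arange(6), 10)   # 10 blocks per subject (hypothetical IDs)

cv = GroupKFold(n_splits=3)
for train_idx, test_idx in cv.split(X, y, groups=subjects):
    # No subject ever appears on both sides of the split.
    assert not set(subjects[train_idx]) & set(subjects[test_idx])
    LinearSVC(max_iter=5000).fit(X[train_idx], y[train_idx])
```

Grouping by subject matters because blocks from the same person share idiosyncratic signal; mixing them across folds would inflate the accuracy estimate.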
### Results

#### Deliverable 1: Jupyter Notebook code

A Jupyter notebook containing my entire project pipeline, with additional exploratory analyses, is included in the repository.
#### Deliverable 2: Project results

The model achieved an overall accuracy of 51.5% on the test data. It performs very well in identifying control conditions such as blank and scrambled images, with over 90% accuracy. However, the model struggles more with the emotional categories: for example, angry is correctly classified only 30% of the time, happy 23%, and neutral 20%, while sad performs somewhat better at 47%.

These lower accuracies likely reflect the more subtle and overlapping neural signatures of emotions, making them more challenging to distinguish.
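Per-class figures like these can be read straight off a confusion matrix. A minimal sketch with scikit-learn (the labels below are toy values for illustration, not the project's actual predictions):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

classes = ["angry", "blank", "happy", "neutral", "sad", "scrambled"]
# Toy true/predicted labels, for illustration only.
y_true = ["angry", "angry", "blank", "happy", "sad", "scrambled", "neutral", "sad"]
y_pred = ["happy", "angry", "blank", "sad", "sad", "scrambled", "happy", "sad"]

cm = confusion_matrix(y_true, y_pred, labels=classes)

# Row-normalizing gives per-class accuracy (each diagonal entry divided by
# that class's true count); the diagonal sum over the total gives overall accuracy.
per_class_acc = cm.diagonal() / cm.sum(axis=1)
overall_acc = cm.diagonal().sum() / cm.sum()
print(overall_acc)  # → 0.625
```

The same matrix can be rendered as an image (as in the cover figure) with `sklearn.metrics.ConfusionMatrixDisplay` and matplotlib.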
## Conclusion and acknowledgement

This project highlights the application of MVPA to neuroimaging data to predict perceived emotion. Thanks to all BrainHack School collaborators from NTU SG and NTU TW for the wonderful learning opportunity!