This repository was archived by the owner on Sep 4, 2025. It is now read-only.

Commit 8c0e6cc

Merge pull request #5 from SentryCoderDev/feature/OpenBCI
Feature/OpenBCI
2 parents 8726041 + b9a2922 commit 8c0e6cc

8 files changed (+257, -11 lines)

README.md

Lines changed: 161 additions & 8 deletions
@@ -7,17 +7,13 @@
</div>
# Special Thanks to OpenBCI
I would like to extend a special thank you to the OpenBCI team for their support and contributions during the development of this project. OpenBCI's generous donations and technical support played a critical role in bringing this project to fruition.
Again, my sincere thanks to the OpenBCI team for their support, guidance, and belief. Without their help, the success of this project would not have been possible.

# Robotics development framework
This platform was built to modularize robotics development and experimentation with Python/C++ using a Raspberry Pi/Jetson Nano and Arduino.

## Coral USB Accelerator

To use the Google Coral USB Accelerator, first re-image the Pi SD card with the image included in the AIY Maker Kit.

(I tried to install the required software from the Coral getting started guide, but was unsuccessful due to an error that "GLIBC_2.29" was not found.)

Alternatively, you can opt for the original (slower) facial recognition process by setting Config.VISION_TECH to opencv. I'm no longer updating this section, so you may encounter some integration issues.

## Setup
```
@@ -36,12 +32,16 @@ For manual control via keyboard
```
./manual_startup.sh
```
For control by thought via a brain-computer interface
```
./bci_startup.sh
```
Contains a preview of the video feed to get started (not available via SSH)
```
./preview_startup.sh
```

### Testing
```
python3 -m pytest --cov=modules --cov-report term-missing
@@ -87,8 +87,161 @@ note: it contains two different piservos, one for nltk and one for motion sensor
### NLTK
NLTK analyzes a text and evaluates the degree to which it is positive or negative. The antenna then uses the piservo control to perform an animation of this evaluation.
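As a rough illustration, NLTK's bundled VADER analyzer produces the kind of positive/negative scores that could drive such an animation. This is a minimal sketch; the project's NLTK module may use a different analyzer or scoring scheme.
```
# Minimal sketch using NLTK's VADER sentiment analyzer; the project's own
# NLTK module may use a different analyzer or scoring scheme.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download('vader_lexicon')  # one-time lexicon download
sia = SentimentIntensityAnalyzer()
scores = sia.polarity_scores("I love this robot!")
print(scores)  # includes 'pos', 'neg', 'neu' and a 'compound' score in [-1, 1]
```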

### OpenBCI Ultracortex Mark IV
Uses a brain-computer interface (BCI) to control the robot with an OpenBCI Ultracortex Mark IV. More information: https://openbci.com/community/openbci-discovery-program-sentrybot-bci-cbi/

## Instructions on OpenBCI Ultracortex Mark IV Setup
Reference guide: https://openbci.com/community/use-your-imagination-power-to-control-robots-and-devices/

Thank you to Rakesh C Jakati for publishing this article.

![image](https://github.com/user-attachments/assets/402adf93-d3de-418b-bd6d-b6c1f06a06dc)

Motor imagery (MI) is one of the core paradigms of brain-computer interfaces (BCI). The user generates induced activity in the motor cortex by imagining motor movements, without any limb movement or external stimuli.

In this guide, we will learn how to use OpenBCI equipment for motor imagery. To this end, we will design a BCI system that allows the user to control a system by imagining different movements of their limbs.

### Materials required

1. 16-channel or 8-channel Cyton board
2. Ultracortex EEG headset
3. ThinkPulse™ Active Electrodes
4. Computer with NeuroPype and the OpenBCI GUI installed (I used the Jetson Orin Nano at the head of the robot as the computer)

### How to connect hardware
If you are using the assembled Ultracortex Mark IV, all you need to do is place the spiky electrodes at the following 10-20 locations: C3, Cz, C4, P3, Pz, P4, O1, O2, and Fpz. If you want to assemble the headset yourself, follow the tutorial in the OpenBCI documentation.

Next, connect the electrodes to the Cyton board pins as shown in the table below.

### Electrode Setup for Cyton Board

| Electrode | Cyton Board Pin |
|-----------|-----------------|
| C3 | Bottom N1P pin |
| Cz | Bottom N2P pin |
| C4 | Bottom N3P pin |
| P3 | Bottom N4P pin |
| Pz | Bottom N5P pin |
| P4 | Bottom N6P pin |
| O1 | Bottom N7P pin |
| O2 | Bottom N8P pin |
| Fpz | Bottom BIAS pin |
| Ear Clip | Bottom SRB pin (SRB2) |
## Electrode Placement for Motor Imagery

![image](https://github.com/user-attachments/assets/c360d2c6-8fac-4076-a155-f80337d24478)
## Software setup
Let us design a two-class BCI using the NeuroPype software. NeuroPype is free for academic users, and you can get a 30-day trial if you are an individual or startup; see the NeuroPype website to get started.

## Imagined Movements Classification
Open the NeuroPype Pipeline Designer application. Go to File and open "Simple Motor Imagery Prediction with CSP". We will use this example pipeline provided with the NeuroPype software.

![image](https://github.com/user-attachments/assets/e98f84f4-cfb5-4ac4-a728-c35ba3932c03)

This pipeline uses EEG to predict whether you are currently imagining a specific limb movement (default: left-hand movement vs. right-hand movement for two-class classification). The output at any given moment is the probability that the person is imagining each type of movement. Because EEG patterns vary between individuals, several nodes (such as Common Spatial Patterns and Logistic Regression) need to adapt based on calibration data specific to the user. This calibration data cannot be arbitrary EEG data; it must meet certain criteria, which is true for most machine learning applications involving EEG data.

Firstly, the node must acquire examples of EEG data for both left-hand and right-hand movements. A single trial per class is insufficient; the node needs approximately 20–50 repetitions when using a full-sized EEG headset. Additionally, these trials must be presented in a more or less randomized order, rather than in blocks of all-left trials followed by all-right trials. This randomized approach is crucial to avoid common beginner mistakes in machine learning with time series data.
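For readers who prefer code, the core of this stage (CSP features plus logistic regression, fit on the calibration epochs) can be sketched offline with MNE and scikit-learn. This is not the NeuroPype pipeline itself; the array shapes and sampling rate below are assumptions.
```
# Offline sketch of CSP + logistic regression on calibration epochs.
# Not the NeuroPype pipeline; shapes and sampling rate are assumptions.
import numpy as np
from mne.decoding import CSP
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# X: (n_trials, n_channels, n_samples), y: 0 = left, 1 = right
X = np.random.randn(40, 8, 750)   # placeholder: 40 trials, 8 channels, 3 s at 250 Hz
y = np.array([0, 1] * 20)

clf = make_pipeline(CSP(n_components=4), LogisticRegression())
print(cross_val_score(clf, X, y, cv=5).mean())  # ~0.5 (chance) on random data
```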

## Working With EEG Markers

![image](https://github.com/user-attachments/assets/7ae281ed-315d-4731-ba6c-1f0ef6c84f80)

For the aforementioned reasons, the EEG signal must be annotated so that one can identify which data points correspond to Class 1 (subject imagines left-hand movement) and which correspond to Class 2 (subject imagines right-hand movement). One way to achieve this is by including a special 'trigger channel' in the EEG, which takes on predefined signal levels to encode different classes (e.g., 0=left, 1=right). In this case, the pipeline assumes that the data packets emitted by the LSL Input node include not just one EEG stream, but also a second stream that contains a list of marker strings along with their timestamps (markers). These are multi-stream packets, and thus, there are two data streams flowing through the entire pipeline. The markers are then interpreted by the rest of the pipeline to indicate the points in time where the EEG data corresponds to a particular class (in this pipeline, a marker with the string 'left' and timestamp 17.5 would indicate that the EEG at 17.5 seconds into the recording is of class 0, and if the marker was 'right', it would indicate class 1).

Of course, the data could contain various other random markers (e.g., 'recording-started', 'user-was-sneezing', 'enter-pressed'), so how does the pipeline determine which markers encode classes and what classes they represent? This binding is established by the Assign Targets node. The settings are shown below. The syntax means that 'left' strings map to class 0, 'right' maps to class 1, and all other strings don't map to anything.
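Conceptually, the Assign Targets node performs a mapping like the following plain-Python sketch (the marker names are taken from this pipeline; everything else is illustrative):
```
# Plain-Python illustration of the marker-to-class mapping the Assign Targets
# node performs; unknown markers are simply ignored.
MARKER_TO_CLASS = {'left': 0, 'right': 1}

def assign_targets(markers):
    """markers: list of (marker_string, timestamp); unknown markers are dropped."""
    return [(ts, MARKER_TO_CLASS[m]) for m, ts in markers if m in MARKER_TO_CLASS]

print(assign_targets([('recording-started', 0.0), ('left', 17.5), ('right', 21.0)]))
# -> [(17.5, 0), (21.0, 1)]
```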

## Segmentation

![image](https://github.com/user-attachments/assets/d6e0dfd4-ea08-4c8f-b2d5-ace647c33639)

The second question is, given that there’s a marker at 17.5 seconds, how does the pipeline determine where, relative to that point in time, to find the relevant EEG pattern that captures the imagined movement? Does it start a second before the marker and end a second after, or does it start at the marker and end 10 seconds later? Extracting the correct portion of the data is typically handled by the Segmentation node, which extracts segments of a specified length relative to each marker. The settings for this pipeline are shown in the picture above and are interpreted as follows: extract a segment that starts 0.5 seconds after each marker and ends 3.5 seconds after that marker (i.e., the segment is 3 seconds long). If you use negative numbers, you can place the segment before the marker.
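The same windowing logic can be written in a few lines of NumPy. This is only an illustration of what the Segmentation node does internally; the array shapes are assumptions.
```
# Illustration of marker-relative segmentation with assumed array shapes;
# NeuroPype's Segmentation node does this internally.
import numpy as np

def segment(eeg, timestamps, marker_times, tmin=0.5, tmax=3.5):
    """eeg: (n_samples, n_channels); timestamps: seconds, one per sample.
    Returns one segment per marker covering [marker + tmin, marker + tmax)."""
    segments = []
    for mt in marker_times:
        mask = (timestamps >= mt + tmin) & (timestamps < mt + tmax)
        segments.append(eeg[mask])
    return segments
```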

## Acquisition of EEG Data and Markers

Plug in the RFduino dongle and connect the electrodes to the Cyton board pins. Wear the EEG headset and connect the ear clip to SRB. Open the OpenBCI GUI, select the appropriate port number, and start streaming data from the Cyton board. Go to the Networking tab, select the LSL protocol, choose the "TIME-SERIES" data type, and start streaming.
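To confirm that the GUI is actually publishing the stream, you can pull a few samples with pylsl. This assumes the default stream name obci_eeg1 reported by the OpenBCI GUI; adjust it if yours differs.
```
# Quick check that the OpenBCI GUI is publishing EEG over LSL.
# Assumes the default stream name 'obci_eeg1'; adjust if yours differs.
from pylsl import StreamInlet, resolve_byprop

streams = resolve_byprop('name', 'obci_eeg1', timeout=10)
if not streams:
    raise RuntimeError("No LSL stream named 'obci_eeg1' found")
inlet = StreamInlet(streams[0])

for _ in range(5):
    sample, timestamp = inlet.pull_sample()
    print(round(timestamp, 3), sample)  # timestamp in seconds, one value per channel
```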

![image](https://github.com/user-attachments/assets/8f41a621-e257-4092-84ac-54a5cc2e693a)
![image](https://github.com/user-attachments/assets/28821931-bab7-46ff-a82e-3d2584e94557)

Before we start classifying the motor imagery data, we need to calibrate the system.

## Recording Calibration Data
The NeuroPype pipeline is doing a great job, but wouldn’t it be nice if we didn’t have to recollect the calibration data each time we run it? It’s often more convenient to record calibration data into a file during the first session and load that file every time we run our pipeline. To achieve this, we need to use the Inject Calibration Data node, which has a second input port for piping in a calibration recording (imported here using Import XDF).

To begin, start the Lab Recorder and find the OpenBCI EEG stream in the window. Next, run the Python script `motorimg_calibrate.py` found in the extras folder of your NeuroPype installation. Then, update the streams in the Lab Recorder. You should now see both the MotorImag-Markers and obci_eeg1 streams along with your computer name.

![image](https://github.com/user-attachments/assets/8d80e981-23c2-4102-966b-eb75b2e2872d)

The Python script, together with the OpenBCI GUI and the Lab Recorder, is used to record calibration data. The script sends markers matching what the person is imagining ('Left' or 'Right') and instructs the user when to imagine that movement; the markers are stored in the .xdf file along with the EEG data.

Run the Python script and start recording the OpenBCI stream and the markers stream using the Lab Recorder. Follow the instructions shown in the window: when the window shows 'R', imagine moving your right arm, and when it shows 'L', imagine moving your left arm. It takes about half a second for a person to read the instruction and begin imagining the movement, and they will finish about 3 seconds later and get ready for the next trial. This is why the segment time limits in the Segmentation node are set to (0.5, 3.5).

You can configure the number of trials per class and other parameters in motorimg_calibrate.py; a simplified sketch of the marker stream it produces is shown below.
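For reference, the kind of marker stream that motorimg_calibrate.py publishes can be sketched with pylsl as follows. This is a simplified stand-in, not the actual script; the stream name matches the one shown in the Lab Recorder, while the trial count and timing values are assumptions.
```
# Simplified stand-in for the marker stream that motorimg_calibrate.py publishes;
# the real script also draws the 'L'/'R' cue window. Timing values are assumptions.
from time import sleep
from random import choice
from pylsl import StreamInfo, StreamOutlet

info = StreamInfo(name='MotorImag-Markers', type='Markers', channel_count=1,
                  nominal_srate=0, channel_format='string', source_id='calib-sketch')
outlet = StreamOutlet(info)

for trial in range(40):                  # trials per class is configurable in the real script
    marker = choice(['left', 'right'])   # cue shown to the user
    outlet.push_sample([marker])         # Lab Recorder stores it alongside the EEG
    sleep(4)                             # time to read the cue and imagine the movement
```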

## Import Calibration Data

You need to edit a few nodes in this pipeline. Delete the three nodes (Import SET, Stream Data, LSL Output) at the bottom of the pipeline design, as we will use our own recorded calibration data.

![image](https://github.com/user-attachments/assets/13fa2c82-78df-4002-b19e-cb2a615f3cbd)

Delete these nodes from the pipeline design.

Delete the Import SET node that is connected to Inject Calibration Data and replace it with Import XDF, as the calibration data is recorded in .xdf format.

![image](https://github.com/user-attachments/assets/2b084f5a-743e-4a8c-b841-400bc5f484b9)
![image](https://github.com/user-attachments/assets/d9cea511-398c-4a0f-b6ff-4854b6361a91)

Enter the calibration data filename

Fill in the appropriate filename of the XDF file in the window.

## Picking up Marker Streams with LSL
![image](https://github.com/user-attachments/assets/9d0825ef-94bd-4ece-945a-7eeb7b3ff638)

The LSL Input node is responsible for returning a marker stream together with the EEG. Enter the name of the OpenBCI stream in the query and after you import the .xdf calibration data, you are ready to go.

## Streaming the Data

Use an OSC Output node to stream the data: connect an OSC (Open Sound Control) Output node to the Logistic Regression node in the pipeline designer and configure it as shown below before streaming.

![image](https://github.com/user-attachments/assets/04ae4d64-9870-4367-9a8e-798e3e7b3d44)

Type in the IP address of the device to which you want to stream the data (either an Arduino or a Raspberry Pi). Use 127.0.0.1 as the IP address if you want to receive the data on your local computer.
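NeuroPype's OSC Output node sends these messages for you during normal operation; for reference, the message format that the robot-side server (modules/osc_module.py) expects looks like this when sent manually with oscpy, using the StartOSCServer default address and port.
```
# Manual test of the message format StartOSCServer (modules/osc_module.py) listens for.
# NeuroPype's OSC Output node sends these messages for you during normal operation.
from oscpy.client import OSCClient

osc = OSCClient('127.0.0.1', 9002)             # defaults used by StartOSCServer
osc.send_message(b'/neuropype', [0.72, 0.28])  # [left probability, right probability]
```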

## Running the NeuroPype pipeline

![image](https://github.com/user-attachments/assets/d9eacb95-f493-437c-ae26-6cd689c6fe98)

We are in the final stage of the Motor Imagery Classification pipeline design. To run the pipeline, follow these steps:

1. Right-click on the NeuroPype icon in the taskbar and select "Run Pipeline."
2. Navigate to your file path and select your edited pipeline file, `simplemotorimagery.pyp`.
3. Run the pipeline.

If everything is configured properly, you will see two windows displaying the Classification and Misclassification Rate. You can now observe real-time predictions for either left or right movements in these windows. Imagine moving your right arm to increase the amplitude power of the right prediction and imagine moving your left arm to increase the amplitude power of the left prediction.

![image](https://github.com/user-attachments/assets/b859b84f-df6f-4f7b-a643-bdc9ddf17924)

## Coming Soon: OpenBCI UltraCortex Mark IV Features
We are excited to announce upcoming updates for our project, which will include advanced features for the OpenBCI UltraCortex Mark IV. Here's what's coming next:

4-Way Control Mechanism: Enhance your experience with our new EMG joystick widget, allowing intuitive and precise control. Watch the demo video to see it in action!

Stay tuned for more updates as we continue to innovate and improve SentryBOT.

<h1><a href="https://www.youtube.com/watch?v=-cbZ1JBfVgk"> -> Click on the thumbnail to watch it on YouTube <- </a></h1>

[![Video Thumbnail](https://img.youtube.com/vi/-cbZ1JBfVgk/maxresdefault.jpg)](https://www.youtube.com/watch?v=-cbZ1JBfVgk)

### Stereo MEMS Microphones
GPIO 18, 19, and 20 are used for stereo MEMS microphones as audio input.

```
Mic 3V - Connects to Pi 3.3V.
Mic GND - Connects to Pi GND.

bci_startup.sh

Lines changed: 8 additions & 0 deletions
@@ -0,0 +1,8 @@
# Stop any running instances of main.py
sudo pkill -f /home/archie/modular-biped/main.py
# Enable camera module
sudo modprobe bcm2835-v4l2
# Start the GPIO daemon
sudo pigpiod
# Start the main script with 'bci' as the argument
sudo python3 /home/archie/modular-biped/main.py bci

main.py

Lines changed: 13 additions & 3 deletions
@@ -41,6 +41,7 @@
from modules.braillespeak import Braillespeak
from modules.buzzer import Buzzer
from modules.pitemperature import PiTemperature
from modules.osc_module import StartOSCServer

from modules.translator import Translator

@@ -57,8 +58,11 @@


def mode():
-    if len(sys.argv) > 1 and sys.argv[1] == 'manual':
-        return Config.MODE_KEYBOARD
+    if len(sys.argv) > 1:
+        if sys.argv[1] == 'manual':
+            return Config.MODE_KEYBOARD
+        elif sys.argv[1] == 'bci':
+            return Config.MODE_BCI
    return Config.MODE_LIVE

def main():
@@ -118,8 +122,14 @@ def main():

    pub.sendMessage('tts', msg='I am awake.')
    pub.sendMessage('speak', msg='hi')

+    if mode() == Config.MODE_BCI:
+        print("BCI mode selected. Starting OSC server...")
+        osc_server = StartOSCServer()
+        osc_server.start_server()
+
    if mode() == Config.MODE_LIVE:
+        print("Live mode selected. Starting Raspberry Pi Camera Module 3")
        # Vision / Tracking
        preview = False
        if len(sys.argv) > 1 and sys.argv[1] == 'preview':

modules/animations/Llft.json

Lines changed: 8 additions & 0 deletions
@@ -0,0 +1,8 @@
[
{"servo:leg_l_hip:mvabs": 90},
{"servo:leg_l_knee:mvabs": 45},
{"servo:leg_l_ankle:mvabs": 0},
{"servo:leg_r_hip:mvabs": 0},
{"servo:leg_r_knee:mvabs": 0},
{"servo:leg_r_ankle:mvabs": 0}
]

modules/animations/Rlft.json

Lines changed: 8 additions & 0 deletions
@@ -0,0 +1,8 @@
[
{"servo:leg_l_hip:mvabs": 0},
{"servo:leg_l_knee:mvabs": 0},
{"servo:leg_l_ankle:mvabs": 0},
{"servo:leg_r_hip:mvabs": 90},
{"servo:leg_r_knee:mvabs": 45},
{"servo:leg_r_ankle:mvabs": 0}
]

modules/animations/lower.json

Lines changed: 8 additions & 0 deletions
@@ -0,0 +1,8 @@
[
{"servo:leg_l_hip:mvabs": 20},
{"servo:leg_l_knee:mvabs": 40},
{"servo:leg_l_ankle:mvabs": 60},
{"servo:leg_r_hip:mvabs": 80},
{"servo:leg_r_knee:mvabs": 60},
{"servo:leg_r_ankle:mvabs": 50}
]

modules/config.py

Lines changed: 1 addition & 0 deletions
@@ -39,6 +39,7 @@ def get_all_pins():
MODE_OFF = 4
MODE_KEYBOARD = 5
MODE_LIVE = 6
MODE_BCI = 7 # New BCI mode


# VISION_TECH = 'coral' # or 'opencv'

modules/osc_module.py

Lines changed: 50 additions & 0 deletions
@@ -0,0 +1,50 @@
# osc_module.py
import json
import os
from pubsub import pub
from oscpy.server import OSCThreadServer
from time import sleep

ANIMATION_DIR = 'modules/animations'

class StartOSCServer:
    def __init__(self, address='127.0.0.1', port=9002):
        self.osc = OSCThreadServer()
        self.address = address
        self.port = port

    def load_animation(self, animation_name):
        """Load animation data from a JSON file."""
        file_path = os.path.join(ANIMATION_DIR, f'{animation_name}.json')
        if not os.path.isfile(file_path):
            print(f"Animation file '{file_path}' does not exist.")
            return None
        with open(file_path, 'r') as file:
            return json.load(file)

    def handle_prediction(self, left, right):
        """Determine which animation to execute based on prediction values."""
        if left > 0.6:
            animation_name = 'Llft'  # Load 'Llft' animation for left
        elif right > 0.6:
            animation_name = 'Rlft'  # Load 'Rlft' animation for right
        else:
            animation_name = 'lower'  # Load 'lower' animation

        # Send the animation action to the Animate class via PubSub
        pub.sendMessage('animate', action=animation_name)

    def start_server(self):
        """Initialize and start the OSC server to handle messages."""
        def callback(left, right):
            """Handle OSC messages and execute animations based on predictions."""
            print("Left prediction : ", round(left, 2))
            print("Right prediction : ", round(right, 2))
            self.handle_prediction(left, right)

        self.osc.listen(address=self.address, port=self.port, default=True)
        self.osc.bind(b'/neuropype', callback)

        print("OSC server started.")
        while True:
            sleep(1)
