Hi @alecote, I think this is due to how CompactDAQ allows you to mix fast and slow sample rate modules in the same task. See cDAQ-9185 User Manual >> AI Convert Clock Signal Behavior For Analog Input Modules >> Slow Sample Rate Modules.
According to the NI-9216 Specifications, the high-resolution ADC timing mode takes 1600 ms to acquire from all channels. When you commit the task, DAQmx acquires a sample from the NI-9216 so that it can immediately return data when the task starts. I would expect it to do this for all NI-9216s in parallel, so I don't know why the time is significantly longer than 1600 ms, but I will note that it's less than 3200 ms. Then, when you stop the task, the NI-9216 may be in the middle of a sample, and it may have to wait up to 1600 ms for that sample to complete.

BTW, the cDAQ-9185 has three analog input timing engines, so it can run three AI tasks at the same time. Perhaps you might get better results by putting the NI-9220s and NI-9216s in separate tasks?
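The separate-task idea might look roughly like this with nidaqmx-python. This is only a sketch, not a tested solution: the physical channel names ("cDAQ1Mod1/ai0:15", etc.) and the slot layout are assumptions, and it needs the NI-DAQmx driver and hardware to actually run.

```python
# Sketch of the "separate tasks" idea, assuming nidaqmx-python.
# The physical channel names ("cDAQ1Mod1/ai0:15", etc.) and the slot
# layout are placeholders -- substitute your own module names.
def make_ai_task(physical_channels, rate=1000.0, samples_per_channel=1000):
    """Create a finite AI voltage task (requires the NI-DAQmx driver)."""
    import nidaqmx  # imported lazily so the sketch loads without the driver
    from nidaqmx.constants import AcquisitionType

    task = nidaqmx.Task()
    for chan in physical_channels:
        task.ai_channels.add_ai_voltage_chan(chan)
    task.timing.cfg_samp_clk_timing(
        rate,
        sample_mode=AcquisitionType.FINITE,
        samps_per_chan=samples_per_channel,
    )
    return task

# Hypothetical usage: one task per module family, so each task claims
# its own AI timing engine (the cDAQ-9185 has three):
#   fast_task = make_ai_task(["cDAQ1Mod1/ai0:15", "cDAQ1Mod2/ai0:15"])  # NI-9220s
#   slow_task = make_ai_task(["cDAQ1Mod3/ai0:7", "cDAQ1Mod4/ai0:7"])    # NI-9216s
```

With the NI-9216s isolated in their own task, their slow conversions should no longer gate the NI-9220 acquisition.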

Good day! I have a cDAQ-9185 with four modules: two NI-9220s and two NI-9216s.
I have an AI task that is meant to last 1 s, with a sampling rate of 1000 Hz and 1000 samples per channel. I am running into timing issues: starting and stopping the task adds so much overhead that it becomes too slow for my application. The full read() operation for that 1 second of data takes 5.6 seconds! My application requires switching between two different tasks, so I do not think I can avoid that overhead.
Creating a simple AI task and calling a single task.read() is very slow when I include channels from the NI-9216 modules. I dug in by controlling the task state transitions myself and timing each step. I found that changing the ADC timing mode to high speed on the NI-9216 helped, but performance remains poor.
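Stepping through the task states explicitly and timing each one might look like the sketch below. Hedged: the TaskMode constants, task.control(), and the ai_adc_timing_mode channel property are real nidaqmx-python names, but the surrounding setup is assumed, and the function needs the NI-DAQmx driver and hardware to actually run.

```python
import time


def time_transitions(task, samples_per_channel=1000):
    """Time each DAQmx task state transition explicitly (needs NI-DAQmx)."""
    from nidaqmx.constants import TaskMode  # lazy: loads without the driver

    timings = {}

    def timed(name, fn):
        t0 = time.perf_counter()
        result = fn()
        timings[name] = (time.perf_counter() - t0) * 1000.0  # milliseconds
        return result

    timed("Verify", lambda: task.control(TaskMode.TASK_VERIFY))
    timed("Reserve", lambda: task.control(TaskMode.TASK_RESERVE))
    timed("Commit", lambda: task.control(TaskMode.TASK_COMMIT))
    timed("Start", task.start)
    data = timed("Read", lambda: task.read(
        number_of_samples_per_channel=samples_per_channel))
    timed("Stop", task.stop)
    return timings, data


# Switching NI-9216 channels out of the default high-resolution ADC
# timing mode (per-channel property; hypothetical loop over the task):
#   from nidaqmx.constants import ADCTimingMode
#   for chan in task.ai_channels:
#       chan.ai_adc_timing_mode = ADCTimingMode.HIGH_SPEED
```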
Is it normal for the task Start and Stop functions to be this slow for the NI-9216? Is this a hardware limitation, or could it be a bug in nidaqmx-python? I am unfortunately not equipped to test with LabVIEW or the NI-DAQmx driver directly.
See the test script attached.
nidaq speedtest single.py
Timing tests
(Note: timings include delays from the network connection.)
Task using all channels from the NI-9220 modules only:
Verify = 108ms
Reserve = 10ms
Commit = 110ms
Start = 39ms
Read = 12ms
Stop = 41ms
Task using a single channel from one NI-9216 module only, default ADC timing mode:
Verify = 95ms
Reserve = 7ms
Commit = 635ms <----- Huge overhead!
Start = 15ms
Read = 216ms <---- Default high-res mode takes 200ms for one sample
Stop = 237ms <----- Huge overhead!
Task using all channels from both NI-9216 modules, default ADC timing mode:
Verify = 125ms
Reserve = 9ms
Commit = 2247ms <----- Huge overhead!
Start = 14ms
Read = 1604ms <---- 200ms x 8 channels = 1600ms
Stop = 1652ms <----- Huge overhead! As long as the read itself!
Task using all channels from both NI-9216 modules, using ADCTimingMode.HIGH_SPEED:
Verify = 92ms
Reserve = 12ms
Commit = 791ms <----- Still a large overhead!
Start = 18ms
Read = 25ms <---- 2.5ms x 8 channels = 20ms
Stop = 57ms
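The Read times above line up with the per-channel conversion times cited from the NI-9216 specifications. A back-of-envelope check (assuming the two NI-9216 modules convert in parallel, so only the 8 channels per module count):

```python
# Per-channel ADC conversion times from the NI-9216 specifications.
HIGH_RES_MS = 200.0      # high-resolution mode, per channel
HIGH_SPEED_MS = 2.5      # high-speed mode, per channel
CHANNELS_PER_MODULE = 8  # both NI-9216 modules convert in parallel

print(CHANNELS_PER_MODULE * HIGH_RES_MS)    # 1600.0 -> measured Read = 1604 ms
print(CHANNELS_PER_MODULE * HIGH_SPEED_MS)  # 20.0   -> measured Read = 25 ms
```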