Adapting OpenTSLM-Flamingo for Predictive Maintenance (Multi-Sensor Engine Data) #137
Replies: 3 comments
- @aiesha-khawaja Thank you for reaching out!
- Thank you @PSchmiedmayer for the response. Looking forward to their insights.
- Hi @aiesha-khawaja, I am working on a fine-tuning use case that matches yours very closely from a data engineering point of view, though thematically it is very different (see https://github.com/mlt94/synchrony). To your question about the rationale generation part (generating a human-readable diagnostic rationale, i.e. CoT): my data is very sensitive, so I couldn't use GPT-4o the way the authors did. A multimodal Gemma 3 27B turned out to be more than sufficient for my use case and could run on my university hardware. I would simply ask it to describe the time-series data that I presented as a heatmap (over a subset of my features, not all 17), and it gave very satisfactory results. I found the performance to be markedly improved by using heatmaps with 8 amplitude bins instead of line plots, as my time-series data fluctuates a lot; the model had a much easier time describing a peak in bins 5-7 than pinpointing the same movement on a plot. Feel free to reach out if you want to share further thoughts. Good luck with your research!
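The amplitude-binning idea described above can be sketched in a few lines. This is a hypothetical illustration, not code from either project: the function names, the min-max scaling choice, and the occupancy-grid representation are all assumptions; only the "8 bins" figure comes from the comment.

```python
import numpy as np

def quantize_to_bins(series, n_bins=8):
    """Min-max scale a (T,) or (C, T) series per channel and digitize each
    reading into one of n_bins amplitude bins (0 = lowest, n_bins-1 = highest)."""
    s = np.asarray(series, dtype=float)
    lo = s.min(axis=-1, keepdims=True)
    hi = s.max(axis=-1, keepdims=True)
    span = np.where(hi - lo == 0, 1.0, hi - lo)  # avoid division by zero on flat channels
    scaled = (s - lo) / span
    return np.minimum((scaled * n_bins).astype(int), n_bins - 1)

def bins_to_grid(bin_idx, n_bins=8):
    """Turn a 1-D array of bin indices into an (n_bins, T) occupancy grid,
    i.e. the discrete 'heatmap' a vision-language model can describe
    ('a peak in bins 5-7') more reliably than a raw line plot."""
    t = np.arange(bin_idx.shape[-1])
    grid = np.zeros((n_bins, bin_idx.shape[-1]), dtype=np.uint8)
    grid[bin_idx, t] = 1  # mark the occupied amplitude bin at each timestep
    return grid
```

Rendering each channel's grid with, for example, matplotlib's imshow would produce the kind of binned heatmap image the commenter fed to Gemma 3.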
What Stanford Biodesign Digital Health Group-related project is this challenge related to?
None
Reproduction
I’m a PhD researcher working on predictive maintenance for aircraft engines and exploring OpenTSLM-Flamingo as the core model. My dataset includes 13 continuous sensor channels (CHT 1–6, EGT 1–6, Manifold Pressure).
I am trying to adapt OpenTSLM to this setting. My adaptation plan:
- Create MaintenanceCoTQADataset.py (based on ECGQACoTQADataset.py).
- Write a maintenance_loader.py similar to har_cot_loader.py.
- Add a stage_maintenance_cot stage in curriculum_learning.py.
- Fine-tune from stage5_ecg_cot.ckpt on the new dataset.

Expected behavior
I expected the model to process 13-channel time-series input, generate meaningful CoT rationales, and correctly classify engine status after fine-tuning, similar to the ECG-QA or HAR CoT setups.
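The planned MaintenanceCoTQADataset might look roughly like the sketch below. Everything here is hypothetical: the JSONL record layout, the field names, and the returned dict keys are assumptions for illustration, not the actual OpenTSLM dataset interface, so the real class would need to match whatever ECGQACoTQADataset.py expects.

```python
import json
from dataclasses import dataclass

@dataclass
class MaintenanceSample:
    series: list      # 13 channels x T readings: CHT 1-6, EGT 1-6, manifold pressure
    question: str     # e.g. "What is the engine's health status?"
    rationale: str    # pre-generated diagnostic CoT rationale
    answer: str       # engine status label, e.g. "normal" / "degrading"

class MaintenanceCoTQADataset:
    """Loads one JSON record per line and pairs each multichannel series
    with its CoT-augmented QA text, mirroring the ECG-QA CoT recipe."""

    def __init__(self, jsonl_path):
        with open(jsonl_path) as f:
            self.samples = [MaintenanceSample(**json.loads(line))
                            for line in f if line.strip()]

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        s = self.samples[idx]
        # Train the model to emit the rationale before the final label.
        target = f"Rationale: {s.rationale}\nAnswer: {s.answer}"
        return {"time_series": s.series, "question": s.question, "target": target}
```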
Additional context
I’m adapting OpenTSLM-Flamingo for academic research in predictive maintenance. Since OpenTSLM integrates time-series and text reasoning using gated cross-attention and CoT training, it seems ideal for this domain.
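For readers unfamiliar with the mechanism mentioned above, gated cross-attention can be reduced to a single-head, unprojected sketch (the real model uses multi-head attention with learned projection matrices; this toy version only illustrates the gating idea):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gated_cross_attention(text_tokens, ts_tokens, gate=0.0):
    """Flamingo-style gated cross-attention stripped to its core: text-token
    queries attend over time-series tokens, and a tanh gate (initialised at 0
    so the pretrained LLM is unchanged at the start of training) scales how
    much time-series information flows into the language stream."""
    d = text_tokens.shape[-1]
    scores = text_tokens @ ts_tokens.T / np.sqrt(d)  # (n_text, n_ts) similarities
    attended = softmax(scores) @ ts_tokens           # (n_text, d) mixed TS features
    return text_tokens + np.tanh(gate) * attended    # gated residual connection
```

With gate = 0 the layer is an identity on the text stream, which is what makes it safe to bolt onto a frozen language model and learn the gate during fine-tuning.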
I’d appreciate guidance on:
Code of Conduct