Replies: 5 comments 24 replies
-
I don't think there is a specific issue with the calibrator regarding reduced windows. It would depend on the image extractor you use, but I don't think you use the …

In the current design, you cannot apply the calibrator before the window reducer, as the order of operations is: R1 to DL0 (this is where data volume reduction is applied, so this is where your waveform reducer will be run). However, the bigger issue with the data volume reducer is that the API limits them to what is currently the assumption of DVR in CTAO: that DVR selects pixels from the waveform data. You cannot, in the current API, implement a data volume reducer that removes samples from the waveform. What you could do, however, is do it at the …
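The pixel-vs-sample distinction above can be made concrete with plain NumPy arrays. This is an illustrative sketch only (shapes, the threshold, and the window position are made up; these are not ctapipe's actual reducer classes):

```python
import numpy as np

# Toy waveform block: (n_pixels, n_samples), e.g. 1855 pixels x 40 samples.
rng = np.random.default_rng(0)
waveforms = rng.normal(size=(1855, 40))

# What the current DVR API assumes: select *pixels*, keep all samples.
pixel_mask = waveforms.max(axis=1) > 2.0   # hypothetical selection criterion
selected = waveforms[pixel_mask]           # shape (n_selected, 40)

# What a reduced readout window would need: drop *samples* for every pixel,
# which a pixel-selection mask cannot express.
window_start, window_width = 10, 15
reduced = waveforms[:, window_start:window_start + window_width]
print(selected.shape, reduced.shape)
```

The key point is in the shapes: pixel selection changes the first axis but always keeps the full sample axis, while a reduced readout window shortens the sample axis for all pixels.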
-
I have some of this here, from the time the tools were developed: https://github.com/maxnoe/test_ctapipe_ml/blob/main/performance.ipynb It should also be part of the benchmarking tools that the group of @kosack developed, though I haven't had a detailed look at that yet.
-
Weren't you the one to remark that we shouldn't optimize the cuts on the same events as we compute the sensitivity? 😉 Anyway, no, you only need one more of each, not twice the amount. You can also, in principle, use the same files for each step, but as you correctly remarked, there is a danger of overfitting the cuts to the sensitivity. So, if you want to use separate datasets for optimizing the cuts and for computing the sensitivity and IRFs, I think you will want:
The separate gamma-diffuse dataset for training the energy regressor is only needed if you want to use the energy as a feature in the particle classifier, but I'd recommend it. You could add … if you are interested only in point sources in the center of the field of view, where we have higher statistics thanks to the point-like simulations.
-
Without knowing which commands you ran and with which configuration, it's hard to answer here. Note that the output of the …
-
I have uploaded a repository to git on this: For now, with just the few tens of files on my external disk, and using Max's (@maxnoe) performance script (with lots of help from him), the statistical fluctuations show that there aren't enough events to get a proper sensitivity, but at least there is no big effect in the range where there are enough stats. Question for the ASWG: could we do a production of DL2 files with e.g. the pointing used for CTAO-N?
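One quick way to see which energy bins are fluctuation-dominated is to look at the relative Poisson uncertainty of the per-bin event counts. A minimal sketch (the counts and the 20% threshold are made-up illustrations, not values from the actual files):

```python
import numpy as np

# Hypothetical surviving-event counts per reconstructed-energy bin.
counts = np.array([5, 40, 300, 1200, 800, 60, 4])

# Relative Poisson uncertainty per bin: 1/sqrt(N). Bins above ~20%
# are dominated by statistical fluctuations rather than by the
# instrument response, so a sensitivity estimate there is unreliable.
rel_err = 1.0 / np.sqrt(counts)
low_stats = rel_err > 0.2
print(low_stats)
```

With real DL2 files you would histogram the post-cut events per energy bin and apply the same check; the edge bins are typically the first to fail.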


-
As part of my effort to see if we can mitigate the effect of extreme GRBs, I am trying to see the effect of having a reduced readout window, to reduce the data volume needing to be transferred.
For now, I am just looking at CTAO-N, i.e. MST-NectarCAMs and LSTs. The required readout windows are 60 ns and 30 ns respectively (see here for the justification: "A-PERF-0935 Cherenkov Image Information", https://jama.cta-observatory.org/perspective.req#/items/28805?projectId=11).
Reducing the window effectively reduces the maximum impact distance, and therefore the collection area. But for extreme GRBs we don't care much about that, so I want to look at reduced windows of 15 ns and 20 ns for MST-NectarCAMs and LSTs respectively.
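In terms of raw data, the reduced windows amount to slicing the sample axis of each waveform. A standalone sketch of that operation, assuming nominal sampling rates of roughly 1 GS/s for NectarCAM and 1.024 GS/s for the LSTs (both are assumptions here, not values taken from the requirement):

```python
import numpy as np

def reduce_readout_window(waveforms, start, width):
    """Keep only `width` samples starting at `start` along the sample axis.

    Standalone stand-in for a readout-window reducer -- NOT a ctapipe
    component. `waveforms` has shape (..., n_pixels, n_samples);
    `start` and `width` are in samples.
    """
    return waveforms[..., start:start + width]

# Window length in samples, given the assumed sampling rates.
n_samples_mst = round(15e-9 * 1.0e9)    # 15 ns at ~1 GS/s
n_samples_lst = round(20e-9 * 1.024e9)  # 20 ns at ~1.024 GS/s

wf = np.zeros((1855, 60))               # toy MST-sized waveform block
reduced = reduce_readout_window(wf, start=20, width=n_samples_mst)
print(n_samples_mst, n_samples_lst, reduced.shape)
```

The open question (below) is where `start` should sit, i.e. how to centre the reduced window on the Cherenkov pulse before the slice is taken.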
First question: how to implement a ReducedWindow:
In discussion with @kosack , he said:
MP (@mdpunch): In the end I just did this the dumb way, defining my own `ReadoutWindowReducer(event, subarray)` and calling this in a loop prior to the `calibrator`/`image_processor`/`shower_processor`, and writing the DL1 and DL2 to a file. (My main doubt is how the calibrator works on the reduced window... as it's a black box for now... until I look in the code; maybe I should apply the calibrator prior to the `ReducedWindow` until I can solve the problem of calibration on reduced-window data?)

Second question: how to apply the new tools for the ML training / cut optimization / IRF production
Karl gave some links for tutorials for the other tools:
My questions on this:
Anyway, that gives me the gamma/proton/electron files needed for the next step, with and without the reduced window. Following the tutorial, I trained the models, which appeared to work. What's missing for me at this step is some way to visualise the results, for example plots to check for overtraining, plots of the "gammaness" of the test/train samples of each particle, plots of the energy resolution, etc. Is there a notebook somewhere which has examples of this?
So in the previous step, from six input merged files (3 gamma, 2 proton, 1 electron), I got one file each of `{gamma/proton/electron}_final.dl2.h5`, but for the next step the procedure asks for two of each in input, for the optimization and IRF files respectively. So does that mean I have to use twelve files in input altogether to get two of each now? (Since my assumption is that the files which are input here must have the ML applied.)

This leads to my problem now: there seems to be no ML information in the files (no RandomForest at least). See the error message below.