
Commit 6dc5013

Merge pull request #266 from sy-c/master
v2.21.4
2 parents d9fb708 + 4f90aa9 commit 6dc5013

File tree: 6 files changed, +42 −16 lines


doc/configurationParameters.md

1 addition, 1 deletion

@@ -107,7 +107,7 @@ The parameters related to 3rd-party libraries are described here for convenience
  | consumer-fileRecorder-* | dropEmptyHBFrames | int | 0 | If 1, memory pages are scanned and empty HB frames are discarded, i.e. pairs of packets which contain only an RDH, the first one with pagesCounter=0 and the second with the stop bit set. This setting does not change the content of in-memory data pages; other consumers still get full data pages with empty packets. This setting is meant to reduce the amount of data recorded for continuous detectors in triggered mode. Use with dropEmptyHBFramesTriggerMask if some empty frames with specific trigger types need to be kept (e.g. TF or SOC). |
  | consumer-fileRecorder-* | dropEmptyHBFramesTriggerMask | int | 0 | (when using dropEmptyHBFrames = 1) Empty HB frames are kept if any bit in the RDH TriggerType field matches this pattern (RDHTriggerType & TriggerMask != 0). To be provided as a decimal value: e.g. 2048 (TF trigger, bit 11), 3584 (TF + SOC + EOC, bits 9, 10, 11). |
  | consumer-fileRecorder-* | fileName | string | | Path to the file where to record data. The following variables are replaced at runtime: ${XXX} -> get variable XXX from the environment, %t -> unix timestamp (seconds since epoch), %T -> formatted date/time, %i -> equipment ID of each data chunk (used to write data from different equipments to different output files), %l -> link ID (used to write data from different links to different output files). |
- | consumer-fileRecorder-* | filesMax | int | 1 | If 1 (default), file splitting is disabled: the file is closed whenever a limit is reached on a given recording stream. Otherwise, file splitting is enabled: whenever the current file reaches a limit, it is closed and a new one is created (with an incremental name). If <= 0, an unlimited number of incremental chunks can be created. If non-zero, it defines the maximum number of chunks. The file name is suffixed with the chunk number (by default, ".001, .002, ..." at the end of the file name). One may use "%f" in the file name to define where this incremental file counter is printed. |
+ | consumer-fileRecorder-* | filesMax | int | 1 | If 1 (default), file splitting is disabled: the file is closed whenever a limit is reached on a given recording stream. Otherwise, file splitting is enabled: whenever the current file reaches a limit, it is closed and a new one is created (with an incremental name). If = 0, an unlimited number of incremental chunks can be created. If smaller than zero, it defines the number of chunks to use round-robin, indefinitely. If bigger than zero, it defines the maximum number of chunks. The file name is suffixed with the chunk number (by default, ".001, .002, ..." at the end of the file name). One may use "%f" in the file name to define where this incremental file counter is printed. |
  | consumer-fileRecorder-* | pagesMax | int | 0 | Maximum number of data pages accepted by the recorder. If zero (default), no maximum is set. |
  | consumer-fileRecorder-* | tfMax | int | 0 | Maximum number of timeframes accepted by the recorder. If zero (default), no maximum is set. |
  | consumer-processor-* | ensurePageOrder | int | 0 | If set, ensures that data pages go out of the processing pool in the same order as the input (which is otherwise not guaranteed with multithreading). This option adds latency. |

doc/releaseNotes.md

5 additions, 0 deletions

@@ -588,3 +588,8 @@ This file describes the main feature changes for each readout.exe released version

  ## v2.21.3 - 05/10/2023
  - Monitoring: fix multiple publishes if the system is stuck for longer than the update period.
+
+ ## v2.21.4 - 12/10/2023
+ - Updated configuration parameters:
+   - consumer-fileRecorder-*.filesMax: if the value is negative, the files are written round-robin, indefinitely. For example, if the value is -4, files 001 to 004 are used as a circular buffer. This implies that limits are defined with the other parameters (e.g. maximum size, number of TFs, or pages).
+ - Log message cosmetics: details in the orbits warning, special chars in RDH errors.

src/ConsumerFileRecorder.cxx

23 additions, 3 deletions

@@ -197,16 +197,18 @@ class ConsumerFileRecorder : public Consumer
   cfg.getOptionalValue(cfgEntryPoint + ".dataBlockHeaderEnabled", recordWithDataBlockHeader, 0);
   theLog.log(LogInfoDevel_(3002), "Recording internal data block headers = %d", recordWithDataBlockHeader);

-  // configuration parameter: | consumer-fileRecorder-* | filesMax | int | 1 | If 1 (default), file splitting is disabled: the file is closed whenever a limit is reached on a given recording stream. Otherwise, file splitting is enabled: whenever the current file reaches a limit, it is closed and a new one is created (with an incremental name). If <= 0, an unlimited number of incremental chunks can be created. If non-zero, it defines the maximum number of chunks. The file name is suffixed with the chunk number (by default, ".001, .002, ..." at the end of the file name). One may use "%f" in the file name to define where this incremental file counter is printed. |
+  // configuration parameter: | consumer-fileRecorder-* | filesMax | int | 1 | If 1 (default), file splitting is disabled: the file is closed whenever a limit is reached on a given recording stream. Otherwise, file splitting is enabled: whenever the current file reaches a limit, it is closed and a new one is created (with an incremental name). If = 0, an unlimited number of incremental chunks can be created. If smaller than zero, it defines the number of chunks to use round-robin, indefinitely. If bigger than zero, it defines the maximum number of chunks. The file name is suffixed with the chunk number (by default, ".001, .002, ..." at the end of the file name). One may use "%f" in the file name to define where this incremental file counter is printed. |
   filesMax = 1;
   if (cfg.getOptionalValue<int>(cfgEntryPoint + ".filesMax", filesMax) == 0) {
     if (filesMax == 1) {
       theLog.log(LogInfoDevel_(3002), "File splitting disabled");
     } else {
       if (filesMax > 0) {
         theLog.log(LogInfoDevel_(3002), "File splitting enabled - max %d files per stream", filesMax);
-      } else {
+      } else if (filesMax == 0) {
         theLog.log(LogInfoDevel_(3002), "File splitting enabled - unlimited files");
+      } else {
+        theLog.log(LogInfoDevel_(3002), "File splitting enabled - %d files round-robin, indefinitely", (-filesMax));
       }
     }
   }

@@ -250,6 +252,8 @@ class ConsumerFileRecorder : public Consumer
   invalidRDH = 0;
   emptyPacketsDropped = 0;
   packetsRecorded = 0;
+
+  silence = 0;
 }

 int start()

@@ -401,7 +405,11 @@ class ConsumerFileRecorder : public Consumer
 }

 // create file handle
-  std::shared_ptr<FileHandle> newHandle = std::make_shared<FileHandle>(newFileName, &theLog, maxFileSize, maxFilePages, maxFileTF);
+  InfoLogger* _theLog = &theLog;
+  if (silence) {
+    _theLog = nullptr;
+  }
+  std::shared_ptr<FileHandle> newHandle = std::make_shared<FileHandle>(newFileName, _theLog, maxFileSize, maxFilePages, maxFileTF);
   if (newHandle == nullptr) {
     return -1;
   }

@@ -483,6 +491,16 @@ class ConsumerFileRecorder : public Consumer
   // let's move to the next file chunk
   int fileId = fpUsed->fileId;
   fileId++;
+  if (filesMax < 0) {
+    if ((fileId % (-filesMax)) == 1) {
+      fileId = 1;
+      if (!silence) {
+        // stop logging round-robin creation of files
+        theLog.logInfo("Recording now continues round-robin on existing files, further iterations will not be logged.");
+        silence = 1;
+      }
+    }
+  }
   if ((filesMax < 1) || (fileId <= filesMax)) {
     createFile(&fpUsed, sourceId, false, fileId);
   }

@@ -651,6 +669,8 @@ class ConsumerFileRecorder : public Consumer
   int dropEmptyHBFrames = 0; // if set, some empty packets are discarded (see logic in code)
   int dropEmptyHBFramesTriggerMask = 0; // (when using dropEmptyHBFrames = 1) empty HB frames are kept if any bit in the RDH TriggerType field matches this pattern (TriggerType & TriggerMask != 0)

+  bool silence = 0; // when set, no logs are printed
+
   class Packet
   {
   public:

src/ReadoutEquipment.cxx

11 additions, 10 deletions

@@ -516,7 +516,8 @@ Thread::CallbackResult ReadoutEquipment::threadCallback(void* arg)
   double dt = (nextBlock->getData()->header.orbitFirstInBlock - ptr->firstTimeframeHbOrbitBegin) * 1.0 / ptr->LHCOrbitRate; // diff in orbits / orbit rate = should be close to current timestamp
   uint32_t expected = ptr->firstTimeframeHbOrbitBegin + (uint32_t)(now * ptr->LHCOrbitRate);
   if (fabs(dt - now) > 10) {
-    theLog.log(logTFdiscontinuityTokenError, "Orbit 0x%X seems inconsistent from expected ~0x%X (orbit rate %u, elapsed time %.1fs)", (int)nextBlock->getData()->header.orbitFirstInBlock, expected, ptr->LHCOrbitRate, now);
+    theLog.log(logTFdiscontinuityTokenError, "Equipment %s link %d - Orbit 0x%X seems inconsistent from expected ~0x%X (orbit rate %u, elapsed time %.1fs)",
+      ptr->name.c_str(), (int)nextBlock->getData()->header.linkId, (int)nextBlock->getData()->header.orbitFirstInBlock, expected, ptr->LHCOrbitRate, now);
   }
 }

@@ -925,7 +926,7 @@ int ReadoutEquipment::processRdh(DataBlockContainerReference& block)
   RdhHandle h(baseAddress + pageOffset);
   rdhIndexInPage++;

-  // printf("RDH #%d @ 0x%X : next block @ +%d bytes\n", rdhIndexInPage, (unsigned int)pageOffset, h.getOffsetNextPacket());
+  // printf("RDH %d @ 0x%X : next block @ +%d bytes\n", rdhIndexInPage, (unsigned int)pageOffset, h.getOffsetNextPacket());

   if (h.validateRdh(errorDescription)) {
     if ((cfgRdhDumpEnabled) || (cfgRdhDumpErrorEnabled)) {

@@ -939,7 +940,7 @@ int ReadoutEquipment::processRdh(DataBlockContainerReference& block)
   }
   statsRdhCheckErr++;
   isPageError = 1;
-  theLog.log(logRdhErrorsToken, "Equipment %d RDH #%d @ 0x%X : invalid RDH: %s", id, rdhIndexInPage, (unsigned int)pageOffset, errorDescription.c_str());
+  theLog.log(logRdhErrorsToken, "Equipment %d RDH %d @ 0x%X : invalid RDH: %s", id, rdhIndexInPage, (unsigned int)pageOffset, errorDescription.c_str());
   // stop on first RDH error (should distinguish valid/invalid block length)
   break;
 } else {

@@ -960,7 +961,7 @@ int ReadoutEquipment::processRdh(DataBlockContainerReference& block)
   }
   if (linkId != h.getLinkId()) {
     if (cfgRdhDumpWarningEnabled) {
-      theLog.log(logRdhErrorsToken, "Equipment %d RDH #%d @ 0x%X : inconsistent link ids: %d != %d", id, rdhIndexInPage, (unsigned int)pageOffset, linkId, h.getLinkId());
+      theLog.log(logRdhErrorsToken, "Equipment %d RDH %d @ 0x%X : inconsistent link ids: %d != %d", id, rdhIndexInPage, (unsigned int)pageOffset, linkId, h.getLinkId());
       isPageError = 1;
     }
     statsRdhCheckStreamErr++;

@@ -982,7 +983,7 @@ int ReadoutEquipment::processRdh(DataBlockContainerReference& block)
   if ((isDefinedLastDetectorField[linkId]) && (pageOffset)) {
     if (checkChangesInDetectorField(h, pageOffset)) {
       if (cfgRdhDumpWarningEnabled) {
-        theLog.log(logRdhErrorsToken, "Equipment %d Link %d RDH #%d @ 0x%X : detector field changed not at page beginning", id, (int)blockHeader.linkId, rdhIndexInPage, (unsigned int)pageOffset);
+        theLog.log(logRdhErrorsToken, "Equipment %d Link %d RDH %d @ 0x%X : detector field changed not at page beginning", id, (int)blockHeader.linkId, rdhIndexInPage, (unsigned int)pageOffset);
         isPageError = 1;
       }
       statsRdhCheckStreamErr++;

@@ -1014,7 +1015,7 @@ int ReadoutEquipment::processRdh(DataBlockContainerReference& block)
   if (newCount !=
       (uint8_t)(RdhLastPacketCounter[linkId] + (uint8_t)1)) {
     theLog.log(LogDebugTrace,
-      "RDH #%d @ 0x%X : possible packets dropped for link %d, packetCounter jump from %d to %d",
+      "RDH %d @ 0x%X : possible packets dropped for link %d, packetCounter jump from %d to %d",
       rdhIndexInPage, (unsigned int)pageOffset,
       (int)linkId, (int)RdhLastPacketCounter[linkId],
       (int)newCount);

@@ -1031,23 +1032,23 @@ int ReadoutEquipment::processRdh(DataBlockContainerReference& block)

   // provision for further checks on superpage size
   /*
-  theLog.log(logRdhErrorsToken, "Equipment %d RDH #%d @ 0x%X : offsetNextPacket is null", id, rdhIndexInPage, (unsigned int)pageOffset);
+  theLog.log(logRdhErrorsToken, "Equipment %d RDH %d @ 0x%X : offsetNextPacket is null", id, rdhIndexInPage, (unsigned int)pageOffset);
   statsRdhCheckErr++;
   isPageError = 1;
   break;
   }
   if ((pageOffset + h.getMemorySize() == blockSize) && (pageOffset + offsetNextPacket == blockSize)) {
     // this is the normal end of page: the last packet fills the end of the page
-    theLog.log(logRdhErrorsToken, "Equipment %d RDH #%d @ 0x%X : end packet size ok: offsetNextPacket = %d bytes, memorySize = %d bytes, page = %d bytes", id, rdhIndexInPage, (unsigned int)pageOffset, (int)offsetNextPacket, (int)h.getMemorySize(), (int)blockSize);
+    theLog.log(logRdhErrorsToken, "Equipment %d RDH %d @ 0x%X : end packet size ok: offsetNextPacket = %d bytes, memorySize = %d bytes, page = %d bytes", id, rdhIndexInPage, (unsigned int)pageOffset, (int)offsetNextPacket, (int)h.getMemorySize(), (int)blockSize);
     break;
   }
   if ((pageOffset + offsetNextPacket == blockSize) || (pageOffset + h.getMemorySize() == blockSize)) {
-    theLog.log(logRdhErrorsToken, "Equipment %d RDH #%d @ 0x%X : end packet size mismatch: offsetNextPacket = %d bytes, memorySize = %d bytes, page = %d bytes", id, rdhIndexInPage, (unsigned int)pageOffset, (int)offsetNextPacket, (int)h.getMemorySize(), (int)blockSize);
+    theLog.log(logRdhErrorsToken, "Equipment %d RDH %d @ 0x%X : end packet size mismatch: offsetNextPacket = %d bytes, memorySize = %d bytes, page = %d bytes", id, rdhIndexInPage, (unsigned int)pageOffset, (int)offsetNextPacket, (int)h.getMemorySize(), (int)blockSize);
     // this is the normal end of page: the last packet fills the end of the page
     break;
   }
   if (pageOffset + offsetNextPacket > blockSize) {
-    theLog.log(logRdhErrorsToken, "Equipment %d RDH #%d @ 0x%X : next packet (+ %d bytes) is outside of page (%d bytes)", id, rdhIndexInPage, (unsigned int)pageOffset, (int)offsetNextPacket, (int)blockSize);
+    theLog.log(logRdhErrorsToken, "Equipment %d RDH %d @ 0x%X : next packet (+ %d bytes) is outside of page (%d bytes)", id, rdhIndexInPage, (unsigned int)pageOffset, (int)offsetNextPacket, (int)blockSize);
     statsRdhCheckErr++;
     isPageError = 1;
   */

src/ReadoutVersion.h

1 addition, 1 deletion

@@ -9,5 +9,5 @@
 // granted to it by virtue of its status as an Intergovernmental Organization
 // or submit itself to any jurisdiction.

-#define READOUT_VERSION "2.21.3"
+#define READOUT_VERSION "2.21.4"
src/readoutConfigEditor.tcl

1 addition, 1 deletion

@@ -53,7 +53,7 @@ set configurationParametersDescriptor {
  | consumer-fileRecorder-* | dropEmptyHBFrames | int | 0 | If 1, memory pages are scanned and empty HB frames are discarded, i.e. pairs of packets which contain only an RDH, the first one with pagesCounter=0 and the second with the stop bit set. This setting does not change the content of in-memory data pages; other consumers still get full data pages with empty packets. This setting is meant to reduce the amount of data recorded for continuous detectors in triggered mode. Use with dropEmptyHBFramesTriggerMask if some empty frames with specific trigger types need to be kept (e.g. TF or SOC). |
  | consumer-fileRecorder-* | dropEmptyHBFramesTriggerMask | int | 0 | (when using dropEmptyHBFrames = 1) Empty HB frames are kept if any bit in the RDH TriggerType field matches this pattern (RDHTriggerType & TriggerMask != 0). To be provided as a decimal value: e.g. 2048 (TF trigger, bit 11), 3584 (TF + SOC + EOC, bits 9, 10, 11). |
  | consumer-fileRecorder-* | fileName | string | | Path to the file where to record data. The following variables are replaced at runtime: ${XXX} -> get variable XXX from the environment, %t -> unix timestamp (seconds since epoch), %T -> formatted date/time, %i -> equipment ID of each data chunk (used to write data from different equipments to different output files), %l -> link ID (used to write data from different links to different output files). |
- | consumer-fileRecorder-* | filesMax | int | 1 | If 1 (default), file splitting is disabled: the file is closed whenever a limit is reached on a given recording stream. Otherwise, file splitting is enabled: whenever the current file reaches a limit, it is closed and a new one is created (with an incremental name). If <= 0, an unlimited number of incremental chunks can be created. If non-zero, it defines the maximum number of chunks. The file name is suffixed with the chunk number (by default, ".001, .002, ..." at the end of the file name). One may use "%f" in the file name to define where this incremental file counter is printed. |
+ | consumer-fileRecorder-* | filesMax | int | 1 | If 1 (default), file splitting is disabled: the file is closed whenever a limit is reached on a given recording stream. Otherwise, file splitting is enabled: whenever the current file reaches a limit, it is closed and a new one is created (with an incremental name). If = 0, an unlimited number of incremental chunks can be created. If smaller than zero, it defines the number of chunks to use round-robin, indefinitely. If bigger than zero, it defines the maximum number of chunks. The file name is suffixed with the chunk number (by default, ".001, .002, ..." at the end of the file name). One may use "%f" in the file name to define where this incremental file counter is printed. |
  | consumer-fileRecorder-* | pagesMax | int | 0 | Maximum number of data pages accepted by the recorder. If zero (default), no maximum is set. |
  | consumer-fileRecorder-* | tfMax | int | 0 | Maximum number of timeframes accepted by the recorder. If zero (default), no maximum is set. |
  | consumer-processor-* | ensurePageOrder | int | 0 | If set, ensures that data pages go out of the processing pool in the same order as the input (which is otherwise not guaranteed with multithreading). This option adds latency. |
