@@ -184,6 +184,22 @@ public void deleteFile(String name) throws IOException {
         blobContainer.deleteBlobsIgnoringIfNotExists(Collections.singletonList(name));
     }
 
+    /**
+     * Removes multiple existing files from the directory in a batch operation.
+     *
+     * <p>This method does not throw an exception when a file does not exist; missing files are simply
+     * ignored, consistent with the behavior of {@link #deleteFile(String)}.
+     *
+     * @param names the collection of filenames to delete
+     * @throws IOException if the files exist but could not be deleted
+     */
+    public void deleteFiles(List<String> names) throws IOException {
+        if (names == null || names.isEmpty()) {
+            return;
+        }
+        blobContainer.deleteBlobsIgnoringIfNotExists(names);
+    }
+
     /**
      * Creates and returns a new instance of {@link RemoteIndexOutput} which will be used to copy files to the remote
      * store.
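The contract in the Javadoc above (ignore null or empty input, otherwise make one batched delete call) can be exercised in isolation. Everything below is a hypothetical stand-in: `FakeBlobContainer` replaces OpenSearch's real `BlobContainer`, and `DeleteFilesSketch` mirrors only the guard-then-delegate logic of `deleteFiles`.

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for OpenSearch's BlobContainer; records each batch call.
class FakeBlobContainer {
    final List<List<String>> batches = new ArrayList<>();

    void deleteBlobsIgnoringIfNotExists(List<String> names) throws IOException {
        batches.add(names); // a real container would skip blobs that do not exist
    }
}

public class DeleteFilesSketch {
    final FakeBlobContainer blobContainer = new FakeBlobContainer();

    // Mirrors the guard in the PR: null or empty input is a no-op.
    public void deleteFiles(List<String> names) throws IOException {
        if (names == null || names.isEmpty()) {
            return;
        }
        blobContainer.deleteBlobsIgnoringIfNotExists(names);
    }

    public static void main(String[] args) throws IOException {
        DeleteFilesSketch dir = new DeleteFilesSketch();
        dir.deleteFiles(null);                               // no remote call
        dir.deleteFiles(List.of());                          // no remote call
        dir.deleteFiles(List.of("segment_1", "segment_2"));  // one batched call
        System.out.println(dir.blobContainer.batches.size());
    }
}
```

Collapsing N single-file deletions into one call matches the batch-operation intent stated in the Javadoc: one remote round trip instead of one per file.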
@@ -1013,28 +1013,33 @@ public void deleteStaleSegments(int lastNMetadataFilesToKeep) throws IOException
                 .stream()
                 .map(metadata -> metadata.uploadedFilename)
                 .collect(Collectors.toSet());
-            AtomicBoolean deletionSuccessful = new AtomicBoolean(true);
-            staleSegmentRemoteFilenames.stream()
+
+            // Collect all files to delete for this metadata file
+            List<String> filesToDelete = staleSegmentRemoteFilenames.stream()
                 .filter(file -> activeSegmentRemoteFilenames.contains(file) == false)
                 .filter(file -> deletedSegmentFiles.contains(file) == false)
-                .forEach(file -> {
-                    try {
-                        remoteDataDirectory.deleteFile(file);
-                        deletedSegmentFiles.add(file);
-                        if (!activeSegmentFilesMetadataMap.containsKey(getLocalSegmentFilename(file))) {
-                            segmentsUploadedToRemoteStore.remove(getLocalSegmentFilename(file));
-                        }
-                    } catch (NoSuchFileException e) {
-                        logger.info("Segment file {} corresponding to metadata file {} does not exist in remote", file, metadataFile);
-                    } catch (IOException e) {
-                        deletionSuccessful.set(false);
-                        logger.warn(
-                            "Exception while deleting segment file {} corresponding to metadata file {}. Deletion will be re-tried",
-                            file,
-                            metadataFile
-                        );
-                    }
-                });
+                .collect(Collectors.toList());
+
+            AtomicBoolean deletionSuccessful = new AtomicBoolean(true);
+            try {
+                // Batch delete all stale segment files
+                remoteDataDirectory.deleteFiles(filesToDelete);
+                deletedSegmentFiles.addAll(filesToDelete);
+
+                // Update cache after successful batch deletion
+                for (String file : filesToDelete) {
+                    if (!activeSegmentFilesMetadataMap.containsKey(getLocalSegmentFilename(file))) {
+                        segmentsUploadedToRemoteStore.remove(getLocalSegmentFilename(file));
+                    }
+                }
+            } catch (IOException e) {
+                deletionSuccessful.set(false);
+                logger.warn(
+                    "Exception while deleting segment files corresponding to metadata file {}. Deletion will be re-tried",
+                    metadataFile,
+                    e
+                );
+            }
Comment on lines +1035 to +1042

@coderabbitai coderabbitai bot Dec 2, 2025


⚠️ Potential issue | 🔴 Critical

Fix incorrect logger usage causing pipeline failure.

The pipeline reports "Bad usage of logger" at line 1037: the logger.warn call has a format string with one placeholder {} but passes two arguments (metadataFile, e) after it. For Log4j2 parameterized logging with an exception, the exception should be the last argument after all placeholder substitutions are complete.

Apply this diff to fix the logger usage:

             } catch (IOException e) {
                 deletionSuccessful.set(false);
                 logger.warn(
-                    "Exception while deleting segment files corresponding to metadata file {}. Deletion will be re-tried",
-                    metadataFile,
-                    e
+                    () -> new ParameterizedMessage(
+                        "Exception while deleting segment files corresponding to metadata file {}. Deletion will be re-tried",
+                        metadataFile
+                    ),
+                    e
                 );
             }
🧰 Tools
🪛 GitHub Actions: Gradle Precommit

[error] 1037-1037: Bad usage of logger: Wrong usage of org.apache.logging.log4j.Logger#warn detected by OpenSearchLoggerUsageChecker (likely incorrect arguments).

🤖 Prompt for AI Agents

In server/src/main/java/org/opensearch/index/store/RemoteSegmentStoreDirectory.java around lines 1035-1042, the logger.warn call is using a parameterized message and passing the exception in the middle of the varargs, which triggers "Bad usage of logger"; ensure the Throwable is the final argument by calling logger.warn("Exception while deleting segment files corresponding to metadata file {}. Deletion will be re-tried", metadataFile, e) so that metadataFile fills the placeholder and the exception is passed as the last parameter to be logged with the stack trace.
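The arity rule underlying this failure can be modeled in plain Java. Log4j2 itself treats a trailing Throwable as the exception to log once all placeholders are consumed; OpenSearch's usage checker is stricter, which is why the suggested fix wraps the message in a `ParameterizedMessage` supplier. The sketch below is a minimal illustration of the placeholder-counting idea only — all names are hypothetical, and this is not Log4j2's or the checker's actual implementation.

```java
// Plain-Java illustration (no Log4j2 dependency) of parameterized-logging arity:
// every "{}" consumes one argument, and a trailing Throwable is only
// unambiguous once all placeholders are filled.
public class LoggerArityCheckSketch {

    // Count "{}" placeholders in a format string.
    static int placeholders(String format) {
        int count = 0;
        for (int i = 0; i + 1 < format.length(); i++) {
            if (format.charAt(i) == '{' && format.charAt(i + 1) == '}') {
                count++;
                i++;
            }
        }
        return count;
    }

    // Hypothetical rule: argument count may exceed the placeholder count by
    // exactly one, and only when the extra trailing argument is a Throwable.
    static boolean usageOk(String format, Object... args) {
        int expected = placeholders(format);
        if (args.length == expected) return true;
        return args.length == expected + 1 && args[args.length - 1] instanceof Throwable;
    }

    public static void main(String[] args) {
        String msg = "Exception while deleting segment files corresponding to metadata file {}. Deletion will be re-tried";
        // One placeholder, one substitution, trailing Throwable: unambiguous.
        System.out.println(usageOk(msg, "metadata_file_1", new java.io.IOException("boom")));
        // One placeholder, two non-Throwable substitutions: rejected.
        System.out.println(usageOk(msg, "metadata_file_1", "extra"));
    }
}
```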

@linuxpi (Contributor):
looks like a valid catch

@coderabbitai (bot):
@linuxpi, thank you for confirming! The fix will ensure the exception is properly logged with its stack trace by Log4j2. 🐰

             if (deletionSuccessful.get()) {
                 logger.debug("Deleting stale metadata file {} from remote segment store", metadataFile);
                 remoteMetadataDirectory.deleteFile(metadataFile);
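The refactor above swaps N per-file deletions for one collect-then-batch-delete pass per metadata file. The standalone sketch below illustrates that pattern with hypothetical stand-ins for the remote directory and the bookkeeping sets; a plain boolean replaces the `AtomicBoolean` since no lambda capture is needed here, and `sorted()` is added only to make the demo output deterministic.

```java
import java.io.IOException;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Sketch of the collect-then-batch-delete pattern; names are illustrative.
public class StaleSegmentBatchSketch {
    public static void main(String[] args) {
        Set<String> staleSegmentRemoteFilenames = Set.of("a", "b", "c", "d");
        Set<String> activeSegmentRemoteFilenames = Set.of("a");
        Set<String> deletedSegmentFiles = new HashSet<>(Set.of("b"));

        // Same two filters as the PR: skip active and already-deleted files.
        List<String> filesToDelete = staleSegmentRemoteFilenames.stream()
            .filter(file -> !activeSegmentRemoteFilenames.contains(file))
            .filter(file -> !deletedSegmentFiles.contains(file))
            .sorted() // deterministic order for the demo only
            .collect(Collectors.toList());

        boolean deletionSuccessful = true;
        try {
            batchDelete(filesToDelete); // one remote round trip instead of N
            deletedSegmentFiles.addAll(filesToDelete);
        } catch (IOException e) {
            // On failure nothing is marked deleted; the whole batch is retried later.
            deletionSuccessful = false;
        }
        System.out.println(filesToDelete + " " + deletionSuccessful);
    }

    // Hypothetical remote call; the real code delegates to deleteBlobsIgnoringIfNotExists.
    static void batchDelete(List<String> names) throws IOException {
        // pretend the remote store accepted the batch
    }
}
```

One behavioral trade-off of the batch form is visible here: a single IOException now marks the whole batch as failed and retriable, whereas the old per-file loop could make partial progress.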
@@ -198,6 +198,61 @@ public void testDeleteFileException() throws IOException {
         assertThrows(IOException.class, () -> remoteDirectory.deleteFile("segment_1"));
     }
 
+    /**
+     *
+     * Tests that deleteFiles successfully deletes multiple files from the remote store.
+     */
+    public void testDeleteFiles() throws IOException {
+        List<String> filesToDelete = List.of("segment_1", "segment_2", "segment_3");
+
+        remoteDirectory.deleteFiles(filesToDelete);
+
+        verify(blobContainer).deleteBlobsIgnoringIfNotExists(filesToDelete);
+    }
+
+    /**
+     *
+     * Tests that deleteFiles handles an empty collection gracefully without attempting any deletions.
+     */
+    public void testDeleteFilesEmptyCollection() throws IOException {
+        remoteDirectory.deleteFiles(Collections.emptyList());
+
+        verify(blobContainer, times(0)).deleteBlobsIgnoringIfNotExists(any());
+    }
+
+    /**
+     *
+     * Tests that deleteFiles handles a null collection gracefully without attempting any deletions.
+     */
+    public void testDeleteFilesNullCollection() throws IOException {
+        remoteDirectory.deleteFiles(null);
+        verify(blobContainer, times(0)).deleteBlobsIgnoringIfNotExists(any());
+    }
+
+    /**
+     *
+     * Tests that deleteFiles completes successfully even when some files don't exist.
+     * The underlying deleteBlobsIgnoringIfNotExists should handle non-existent files gracefully.
+     */
+    public void testDeleteFilesWithNonExistentFiles() throws IOException {
+        List<String> filesToDelete = List.of("segment_1", "non_existent", "segment_2");
+
+        remoteDirectory.deleteFiles(filesToDelete);
+
+        verify(blobContainer).deleteBlobsIgnoringIfNotExists(filesToDelete);
+    }
+
+    /**
+     *
+     * Tests that deleteFiles propagates IOException when the underlying blob container operation fails.
+     */
+    public void testDeleteFilesException() throws IOException {
+        List<String> filesToDelete = List.of("segment_1", "segment_2");
+        doThrow(new IOException("Error writing to blob store")).when(blobContainer).deleteBlobsIgnoringIfNotExists(filesToDelete);
+
+        assertThrows(IOException.class, () -> remoteDirectory.deleteFiles(filesToDelete));
+    }
+
     public void testCreateOutput() {
         IndexOutput indexOutput = remoteDirectory.createOutput("segment_1", IOContext.DEFAULT);
         assertTrue(indexOutput instanceof RemoteIndexOutput);