
Conversation

johnno1962
Contributor

@johnno1962 johnno1962 commented Jul 5, 2025

This PR adds a new lld option, --read-workers=20, that defers all disk I/O and then performs it on multiple threads, so the process is never stalled waiting for mapped input files to be paged in. This results in a saving of elapsed time. For a large link (iterating on Chromium) these are the baseline link times after saving a single file and rebuilding (seconds, inside Xcode):

26.01, 25.84, 26.15, 26.03, 27.10, 25.90, 25.86, 25.81, 25.80, 25.87

With the proposed code change, and using the --read-workers=20 option, the linking times reduce to the following:

21.13, 20.35, 20.01, 20.01, 20.30, 20.39, 19.97, 20.23, 20.17, 20.23

The secret sauce is in the new function multiThreadedPageIn() in Driver.cpp. Without the option lld behaves as before.

Edit: with subsequent commits I've taken this novel I/O approach to its full potential. The latest linking times are now:

13.2, 11.9, 12.12, 12.01, 11.99, 13.11, 11.93, 11.95, 12.18, 11.97

Chrome still links and runs, so it doesn't look like anything is broken. Despite being multi-threaded, all memory access is read-only and the original code paths are not changed. All that is happening is that the system is asked to proactively page files in rather than waiting for processing to page fault, which would otherwise stall the process.
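For readers skimming the thread, here is a rough, self-contained sketch of the technique being described. It is not the lld patch itself; the helper name, buffer list and thread handling are illustrative only. The idea is to read one byte per page of each memory-mapped input from a small pool of threads so the kernel has already paged the file in by the time the single-threaded parser reaches it.

// Illustrative sketch only (not the actual lld code): force page-ins by
// reading one byte per page of each mapped buffer from several threads,
// so later single-threaded parsing never stalls on a page fault.
#include <atomic>
#include <cstddef>
#include <thread>
#include <utility>
#include <vector>

static void pageInBuffers(
    const std::vector<std::pair<const char *, size_t>> &buffers,
    size_t pageSize, unsigned nThreads) {
  std::atomic<size_t> next{0};
  std::vector<std::thread> workers;
  for (unsigned t = 0; t < nThreads; ++t)
    workers.emplace_back([&] {
      // Each worker grabs the next unprocessed buffer until none remain.
      for (size_t i = next++; i < buffers.size(); i = next++) {
        const char *start = buffers[i].first;
        const char *end = start + buffers[i].second;
        for (const char *page = start; page < end; page += pageSize) {
          volatile char c = *page; // one read per page triggers the page-in
          (void)c;                 // keep the load from being optimized away
        }
      }
    });
  for (auto &w : workers)
    w.join();
}

In the actual patch the buffers come from readFile() calls whose processing is deferred; once the page-ins have run, deferredAddFile() processes each buffer exactly as before.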

@llvmbot
Member

llvmbot commented Jul 5, 2025

@llvm/pr-subscribers-lld-macho

@llvm/pr-subscribers-lld

Author: John Holdsworth (johnno1962)

Changes

This PR adds a new lld option, --read-threads=20, that defers all disk I/O and then performs it on multiple threads, so the process is never stalled waiting for mapped input files to be paged in, resulting in a saving of elapsed time. For a large link (iterating on the Chromium project) these are the baseline link times after saving a single file and rebuilding (seconds):

26.01, 25.84, 26.15, 26.03, 27.10, 25.90, 25.86, 25.81, 25.80, 25.87

With the proposed code change, and using the --read-threads=20 option, the linking times reduce to the following:

21.13, 20.35, 20.01, 20.01, 20.30, 20.39, 19.97, 20.23, 20.17, 20.23

The secret sauce is in the new function multiThreadedPageIn() in Driver.cpp. Without the option set lld behaves as before.


Full diff: https://github.com/llvm/llvm-project/pull/147134.diff

3 Files Affected:

  • (modified) lld/MachO/Config.h (+1)
  • (modified) lld/MachO/Driver.cpp (+94-10)
  • (modified) lld/MachO/Options.td (+3)
diff --git a/lld/MachO/Config.h b/lld/MachO/Config.h
index a01e60efbe761..92c6eb85f4123 100644
--- a/lld/MachO/Config.h
+++ b/lld/MachO/Config.h
@@ -186,6 +186,7 @@ struct Configuration {
   bool interposable = false;
   bool errorForArchMismatch = false;
   bool ignoreAutoLink = false;
+  int readThreads = 0;
   // ld64 allows invalid auto link options as long as the link succeeds. LLD
   // does not, but there are cases in the wild where the invalid linker options
   // exist. This allows users to ignore the specific invalid options in the case
diff --git a/lld/MachO/Driver.cpp b/lld/MachO/Driver.cpp
index 9eb391c4ee1b9..a244f2781c22c 100644
--- a/lld/MachO/Driver.cpp
+++ b/lld/MachO/Driver.cpp
@@ -47,6 +47,7 @@
 #include "llvm/Support/TarWriter.h"
 #include "llvm/Support/TargetSelect.h"
 #include "llvm/Support/TimeProfiler.h"
+#include "llvm/Support/Process.h"
 #include "llvm/TargetParser/Host.h"
 #include "llvm/TextAPI/Architecture.h"
 #include "llvm/TextAPI/PackedVersion.h"
@@ -282,11 +283,11 @@ static void saveThinArchiveToRepro(ArchiveFile const *file) {
           ": Archive::children failed: " + toString(std::move(e)));
 }
 
-static InputFile *addFile(StringRef path, LoadType loadType,
-                          bool isLazy = false, bool isExplicit = true,
-                          bool isBundleLoader = false,
-                          bool isForceHidden = false) {
-  std::optional<MemoryBufferRef> buffer = readFile(path);
+static InputFile *deferredAddFile(std::optional<MemoryBufferRef> buffer,
+                                  StringRef path, LoadType loadType,
+                                  bool isLazy = false, bool isExplicit = true,
+                                  bool isBundleLoader = false,
+                                  bool isForceHidden = false) {
   if (!buffer)
     return nullptr;
   MemoryBufferRef mbref = *buffer;
@@ -441,6 +442,14 @@ static InputFile *addFile(StringRef path, LoadType loadType,
   return newFile;
 }
 
+static InputFile *addFile(StringRef path, LoadType loadType,
+                          bool isLazy = false, bool isExplicit = true,
+                          bool isBundleLoader = false,
+                          bool isForceHidden = false) {
+    return deferredAddFile(readFile(path), path, loadType, isLazy,
+                           isExplicit, isBundleLoader, isForceHidden);
+}
+
 static std::vector<StringRef> missingAutolinkWarnings;
 static void addLibrary(StringRef name, bool isNeeded, bool isWeak,
                        bool isReexport, bool isHidden, bool isExplicit,
@@ -564,13 +573,21 @@ void macho::resolveLCLinkerOptions() {
   }
 }
 
-static void addFileList(StringRef path, bool isLazy) {
+typedef struct { StringRef path; std::optional<MemoryBufferRef> buffer; } DeferredFile;
+
+static void addFileList(StringRef path, bool isLazy,
+  std::vector<DeferredFile> &deferredFiles, int readThreads) {
   std::optional<MemoryBufferRef> buffer = readFile(path);
   if (!buffer)
     return;
   MemoryBufferRef mbref = *buffer;
   for (StringRef path : args::getLines(mbref))
-    addFile(rerootPath(path), LoadType::CommandLine, isLazy);
+    if (readThreads) {
+      StringRef rrpath = rerootPath(path);
+      deferredFiles.push_back({rrpath, readFile(rrpath)});
+    }
+    else
+      addFile(rerootPath(path), LoadType::CommandLine, isLazy);
 }
 
 // We expect sub-library names of the form "libfoo", which will match a dylib
@@ -1215,13 +1232,61 @@ static void handleSymbolPatterns(InputArgList &args,
     parseSymbolPatternsFile(arg, symbolPatterns);
 }
 
-static void createFiles(const InputArgList &args) {
+// Most input files have been mapped but not yet paged in.
+// This code forces the page-ins on multiple threads so
+// the process is not stalled waiting on disk buffer i/o.
+void multiThreadedPageIn(std::vector<DeferredFile> &deferred, int nthreads) {
+    typedef struct {
+        std::vector<DeferredFile> &deferred;
+        size_t counter, total, pageSize;
+        pthread_mutex_t mutex;
+    } PageInState;
+    PageInState state = {deferred, 0, 0,
+        llvm::sys::Process::getPageSizeEstimate(), pthread_mutex_t()};
+    pthread_mutex_init(&state.mutex, NULL);
+
+    pthread_t running[200];
+    int maxthreads = sizeof running / sizeof running[0];
+    if (nthreads > maxthreads)
+        nthreads = maxthreads;
+    for (int t=0; t<nthreads; t++)
+        pthread_create(&running[t], nullptr, [](void* ptr) -> void*{
+            PageInState &state = *(PageInState *)ptr;
+            static int total = 0;
+            while (true) {
+                pthread_mutex_lock(&state.mutex);
+                if (state.counter >= state.deferred.size()) {
+                    pthread_mutex_unlock(&state.mutex);
+                    return nullptr;
+                }
+                DeferredFile &add = state.deferred[state.counter];
+                state.counter += 1;
+                pthread_mutex_unlock(&state.mutex);
+
+                int t = 0; // Reference each page to load it into memory.
+                for (const char *start = add.buffer->getBuffer().data(),
+                     *page = start; page<start+add.buffer->getBuffer().size();
+                     page += state.pageSize)
+                    t += *page;
+                state.total += t; // Avoids whole section being optimised out.
+            }
+        }, &state);
+
+    for (int t=0; t<nthreads; t++)
+        pthread_join(running[t], nullptr);
+
+    pthread_mutex_destroy(&state.mutex);
+}
+
+void createFiles(const InputArgList &args, int readThreads) {
   TimeTraceScope timeScope("Load input files");
   // This loop should be reserved for options whose exact ordering matters.
   // Other options should be handled via filtered() and/or getLastArg().
   bool isLazy = false;
   // If we've processed an opening --start-lib, without a matching --end-lib
   bool inLib = false;
+  std::vector<DeferredFile> deferredFiles;
+
   for (const Arg *arg : args) {
     const Option &opt = arg->getOption();
     warnIfDeprecatedOption(opt);
@@ -1229,6 +1294,11 @@ static void createFiles(const InputArgList &args) {
 
     switch (opt.getID()) {
     case OPT_INPUT:
+      if (readThreads) {
+        StringRef rrpath = rerootPath(arg->getValue());
+        deferredFiles.push_back({rrpath,readFile(rrpath)});
+        break;
+      }
       addFile(rerootPath(arg->getValue()), LoadType::CommandLine, isLazy);
       break;
     case OPT_needed_library:
@@ -1249,7 +1319,7 @@ static void createFiles(const InputArgList &args) {
         dylibFile->forceWeakImport = true;
       break;
     case OPT_filelist:
-      addFileList(arg->getValue(), isLazy);
+      addFileList(arg->getValue(), isLazy, deferredFiles, readThreads);
       break;
     case OPT_force_load:
       addFile(rerootPath(arg->getValue()), LoadType::CommandLineForce);
@@ -1295,6 +1365,12 @@ static void createFiles(const InputArgList &args) {
       break;
     }
   }
+
+  if (readThreads) {
+    multiThreadedPageIn(deferredFiles, readThreads);
+    for (auto &add : deferredFiles)
+      deferredAddFile(add.buffer, add.path, LoadType::CommandLine, isLazy);
+  }
 }
 
 static void gatherInputSections() {
@@ -1687,6 +1763,14 @@ bool link(ArrayRef<const char *> argsArr, llvm::raw_ostream &stdoutOS,
     }
   }
 
+  if (auto *arg = args.getLastArg(OPT_read_threads)) {
+    StringRef v(arg->getValue());
+    unsigned threads = 0;
+    if (!llvm::to_integer(v, threads, 0) || threads < 0)
+      error(arg->getSpelling() + ": expected a positive integer, but got '" +
+            arg->getValue() + "'");
+    config->readThreads = threads;
+  }
   if (auto *arg = args.getLastArg(OPT_threads_eq)) {
     StringRef v(arg->getValue());
     unsigned threads = 0;
@@ -2107,7 +2191,7 @@ bool link(ArrayRef<const char *> argsArr, llvm::raw_ostream &stdoutOS,
     TimeTraceScope timeScope("ExecuteLinker");
 
     initLLVM(); // must be run before any call to addFile()
-    createFiles(args);
+    createFiles(args, config->readThreads);
 
     // Now that all dylibs have been loaded, search for those that should be
     // re-exported.
diff --git a/lld/MachO/Options.td b/lld/MachO/Options.td
index 4f0602f59812b..3dc98fccc1b7b 100644
--- a/lld/MachO/Options.td
+++ b/lld/MachO/Options.td
@@ -396,6 +396,9 @@ def dead_strip : Flag<["-"], "dead_strip">,
 def interposable : Flag<["-"], "interposable">,
     HelpText<"Indirects access to all exported symbols in an image">,
     Group<grp_opts>;
+def read_threads : Joined<["--"], "read-threads=">,
+    HelpText<"Number of threads to use paging in files.">,
+    Group<grp_lld>;
 def order_file : Separate<["-"], "order_file">,
     MetaVarName<"<file>">,
     HelpText<"Layout functions and data according to specification in <file>">,


github-actions bot commented Jul 5, 2025

✅ With the latest revision this PR passed the C/C++ code formatter.

@johnno1962 johnno1962 force-pushed the threaded-paging branch 6 times, most recently from b66eb42 to fd5647a Compare July 5, 2025 11:39
@johnno1962 johnno1962 changed the title [lld][Macho]Multi-threaded disk i/o. 20% speedup linking a large project. [lld][Macho]Multi-threaded i/o. 20% speedup linking a large project. Jul 5, 2025
@johnno1962 johnno1962 force-pushed the threaded-paging branch 5 times, most recently from 9acbaea to 47bad1d Compare July 6, 2025 10:44
@carlocab carlocab requested review from BertalanD and nico and removed request for nico July 6, 2025 16:10
@johnno1962 johnno1962 force-pushed the threaded-paging branch 3 times, most recently from a324caa to 6936449 Compare July 6, 2025 16:56
@johnno1962 johnno1962 changed the title [lld][Macho]Multi-threaded i/o. 20% speedup linking a large project. [lld][MachO]Multi-threaded i/o. 40% speedup linking a large project. Jul 6, 2025
@johnno1962
Contributor Author

johnno1962 commented Jul 6, 2025

The last commit was to also use the threaded page-in approach with object files in archives. The last ten linking times were:

19.45, 15.43, 15.45, 13.43, 12.30, 12.98, 12.10, 15.35, 15.13, 15.69

Looking at Activity Monitor as a shell loop cycles through a link + sleep 15, I/O is far more concentrated now (as it should be).

[Activity Monitor screenshot]

@johnno1962 johnno1962 force-pushed the threaded-paging branch 2 times, most recently from fdc4c38 to 767b7b1 Compare July 6, 2025 19:14
johnno1962 and others added 3 commits July 30, 2025 11:38
Co-authored-by: Daniel Rodríguez Troitiño <[email protected]>
Co-authored-by: Daniel Rodríguez Troitiño <[email protected]>
@johnno1962
Contributor Author

johnno1962 commented Jul 30, 2025

@drodriguez, thanks for persisting with this PR. I get that parallelFor may not use exactly readThreads threads, the limit in practice being the number of CPUs on the host, which is also the default value for the -threads option. My point is that the precise number of threads doesn't actually matter as long as there is more than one tickling the files into memory. The code using parallelFor is at least something I understand, and it is performant.

It's unfortunate that we're having to develop our own background threading abstraction, but platform-agnostic alternatives don't seem to be readily available, and there is no point running the risk of regressing existing code by trying to restructure it for our limited use case.

We are beginning to cycle on this PR, reverting changes that were earlier suggested be put in, which is a sign it is ready enough and that we're getting down to fairly minor nits. If there is something you really can't live with, let me know or suggest a specific alternative and I can test it.

static size_t totalBytes = 0;
std::atomic_int index = 0;

parallelFor(0, config->readThreads, [&](size_t I) {
Contributor

If readThreads no longer represents the exact number of threads used to read, maybe it deserves another name, and a different argument name and help text. If this is going to use the -threads value anyway, why would one need anything more than a boolean -parallelize-input-preload flag, using deferred.size() as the value? And if one is going to use deferred.size(), why not use parallelFor iterating over deferred and remove a bunch of code that handles the indices manually?

PS: it may be less performant than this version, but I would argue that I prefer less performance and code that is easier to maintain.
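For concreteness, a sketch of what that suggestion could look like, assuming the DeferredFile type from this patch plus llvm::parallelFor (llvm/Support/Parallel.h) and llvm::sys::Process::getPageSizeEstimate(); the function name preloadDeferredFiles is made up for illustration:

// Sketch of the suggestion: iterate over the deferred files directly with
// parallelFor and let LLVM's thread pool choose the worker count, instead
// of handing out indices manually.
#include "llvm/Support/Parallel.h"
#include "llvm/Support/Process.h"

static void preloadDeferredFiles(std::vector<DeferredFile> &deferred) {
  size_t pageSize = llvm::sys::Process::getPageSizeEstimate();
  llvm::parallelFor(0, deferred.size(), [&](size_t i) {
    if (!deferred[i].buffer)
      return;
    llvm::StringRef data = deferred[i].buffer->getBuffer();
    for (const char *page = data.data(), *end = page + data.size();
         page < end; page += pageSize) {
      volatile char c = *page; // touch one byte per page to force the page-in
      (void)c;
    }
  });
}

If LLVM is built without threading, parallelFor falls back to running the body serially, which sidesteps the thread-count question entirely.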

Comment on lines 339 to 342
#ifndef NDEBUG
#include <iomanip>
#include <iostream>
#endif
Contributor

For the record, I don't think they need to be removed and you can still use setprecision, but the includes should not be in the middle of the file.

Contributor

@drodriguez drodriguez left a comment

In the absence of feedback from others, I approve this version, which seems to make everyone happy.

Contributor

@ellishg ellishg left a comment

Thanks for working with us. I think it's a very strong PR now.

@johnno1962
Contributor Author

👋, back from my break and I didn't think of anything new to add to the PR, other than perhaps expanding the main comment or including a link to this PR as documentation of the approach. As peace seems to have broken out, how about we merge this before it snags a conflict? Thanks for all your help and ideas.

@drodriguez
Contributor

@johnno1962 do you want to modify the summary with the final spelling of the flag and maybe updating the working for some later results?

@johnno1962
Contributor Author

johnno1962 commented Aug 27, 2025

I'm not sure I understand what you mean by "the summary" and "the working" but I've updated the initial comment in this PR thread with the new flag name. I don't plan on making any further commits while everything is signed off.

@drodriguez
Contributor

I meant to write "wording".

The title and the summary of this PR (your first "comment") are used as the content of the commit that will be created. It contains the old spelling of the flag, and numbers that I don't know whether they are still relevant or out of date. Do you want to modify those before merging?

@johnno1962
Contributor Author

johnno1962 commented Aug 27, 2025

I've updated the second occurrence of the flag (now --read-workers) that I missed in the "initial post". The numbers are still relevant; they haven't changed significantly since July 8th.

@drodriguez drodriguez merged commit 2b24287 into llvm:main Aug 27, 2025
9 checks passed
@johnno1962
Contributor Author

Thanks @drodriguez, happy to see this land safe and sound. A conflict might have been outside my git comfort zone! It was an interesting PR for me, as I had started out seeing if using compressed object files could be more performant, then noticed the uncompressed versions could be made to read in much more quickly. It seems memory-mapping input files and leaving it to the system to page them in is not necessarily a winning I/O strategy. Thanks for all your help and patience.

@johnno1962
Contributor Author

johnno1962 commented Sep 10, 2025

#157917 raised. I'm not familiar with PrefetchVirtualMemory, nor do I have a way to test it, so I've not updated the Windows code. See also microsoft/Windows-Dev-Performance#108, which was the second search result.

@aganea
Member

aganea commented Sep 10, 2025

#157917 raised. I'm not familiar with PrefetchVirtualMemory, nor do I have a way to test it, so I've not updated the Windows code. See also microsoft/Windows-Dev-Performance#108, which was the second search result.

Regarding PrefetchVirtualMemory, what the OPs in the "Windows-Dev-Performance" issue mentioned are missing is that this API is not synchronous. It simply adds the requests to a kernel queue, and the pages are then fetched from storage into RAM by an OS thread in the background. So the benefit is not immediate; it can vary depending on system load, the backing storage, memory pressure, etc. I would assume madvise does about the same kind of thing.
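For reference, a minimal sketch of the POSIX side of that advisory approach, using madvise(MADV_WILLNEED) on a memory-mapped range; like PrefetchVirtualMemory it only queues the request, and the call returns before the pages are resident:

// Minimal sketch: ask the kernel to start reading a mapped range into RAM
// in the background. This is only a hint; failure is harmless and pages may
// still fault in on first access if the prefetch hasn't finished.
#include <cstddef>
#include <cstdio>
#include <sys/mman.h>

static void hintWillNeed(const void *start, size_t length) {
  // madvise expects a page-aligned address; mmap'd file buffers already are.
  if (madvise(const_cast<void *>(start), length, MADV_WILLNEED) != 0)
    perror("madvise(MADV_WILLNEED)");
}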

// Reference all file's mmap'd pages to load them into memory.
for (const char *page = buff.data(), *end = page + buff.size(); page < end;
     page += pageSize)
  LLVM_ATTRIBUTE_UNUSED volatile char t = *page;
Contributor

FYI: I just saw a broken asan test that seems to point to this line having a use-after-free. The next test succeeded, so I think it might be a rare race condition. Do you think this could be related?

https://lab.llvm.org/buildbot/#/builders/169/builds/15170/steps/11/logs/stdio

Contributor Author

@johnno1962 johnno1962 Sep 23, 2025

Hi, I guess if the paging thread is held up for any reason it could still be running when global deallocations take place. The test that failed is a bit of an edge case, with a single file and almost no processing. I think the best way to avoid this would be to move ahead with converting the code to use madvise(), which was being explored in the other PR. I've rolled back the latest commit and we are back to where we were on the 12th, ready to merge, before I started experimenting and everyone lost interest.

Contributor Author

Interesting that this should have been the day after 105fc90; I can't think why.

@DavidSpickett
Collaborator

DavidSpickett commented Sep 24, 2025

This feature has some problems when LLVM_ENABLE_THREADS is OFF. We first saw them on our 32-bit Arm build (https://lab.llvm.org/buildbot/#/builders/122/builds/2042) and I have been able to reproduce them on 64-bit Arm as well.

Build with threading disabled, then run stress -c <high number> in the background. Run read-workers.s in a loop and pretty soon it will fail with things like:

# RUN: at line 7
ld64.lld -arch x86_64 -platform_version macos 11.0 11.0 -syslibroot /home/david.spickett/llvm-project/lld/test/MachO/Inputs/MacOSX.sdk -lSystem -fatal_warnings --read-workers=2 /home/david.spickett/build-llvm-arm/tools/lld/test/MachO/Output/read-workers.s.tmp.o -o /dev/null
# executed command: ld64.lld -arch x86_64 -platform_version macos 11.0 11.0 -syslibroot /home/david.spickett/llvm-project/lld/test/MachO/Inputs/MacOSX.sdk -lSystem -fatal_warnings --read-workers=2 /home/david.spickett/build-llvm-arm/tools/lld/test/MachO/Output/read-workers.s.tmp.o -o /dev/null
# .---command stderr------------
# | pure virtual method called
# | terminate called without an active exception
# `-----------------------------
# error: command failed with exit status: -6
  
# RUN: at line 6
ld64.lld -arch x86_64 -platform_version macos 11.0 11.0 -syslibroot /home/tcwg-buildbot/worker/clang-armv8-lld-2stage/llvm/lld/test/MachO/Inputs/MacOSX.sdk -lSystem -fatal_warnings --read-workers=1 /home/tcwg-buildbot/worker/clang-armv8-lld-2stage/stage1/tools/lld/test/MachO/Output/read-workers.s.tmp.o -o /dev/null
# executed command: ld64.lld -arch x86_64 -platform_version macos 11.0 11.0 -syslibroot /home/tcwg-buildbot/worker/clang-armv8-lld-2stage/llvm/lld/test/MachO/Inputs/MacOSX.sdk -lSystem -fatal_warnings --read-workers=1 /home/tcwg-buildbot/worker/clang-armv8-lld-2stage/stage1/tools/lld/test/MachO/Output/read-workers.s.tmp.o -o /dev/null
# .---command stderr------------
# | terminate called after throwing an instance of 'std::bad_array_new_length'
# |   what():  std::bad_array_new_length
# | pure virtual method called
# | terminate called recursively
# `-----------------------------
# error: command failed with exit status: -6

(I did try a debug build, but the traceback was no better)

I see you have some #ifs to handle the case where there are no threads, but I don't think they're enough. I think it would be better not to construct this SerialBackgroundQueue at all when you know there cannot be extra threads. I don't know if you intended the queue to have at least one thread, i.e. the main process itself.

The option itself would ideally warn or reject values > 1 (or > 0 ?) when threading is not enabled.

Also, the current code produces this warning:

$ ninja
[4691/6894] Building CXX object tools/lld/MachO/CMakeFiles/lldMachO.dir/Driver.cpp.o
/home/david.spickett/llvm-project/lld/MachO/Driver.cpp:346:8: warning: unused variable 'preloadDeferredFile' [-Wunused-variable]
  346 |   auto preloadDeferredFile = [&](const DeferredFile &deferredFile) {
      |        ^~~~~~~~~~~~~~~~~~~
1 warning generated.

If you cannot reproduce the failure, I can implement the fix instead if you explain the best way to do so without breaking anything else.

@johnno1962
Contributor Author

johnno1962 commented Sep 24, 2025

Thanks for stepping in @DavidSpickett, you can certainly broaden the scope of the '#if LLVM_ENABLE_THREADS':

diff --git a/lld/MachO/Driver.cpp b/lld/MachO/Driver.cpp
index 7ce987e400a2..b1d40aa4de64 100644
--- a/lld/MachO/Driver.cpp
+++ b/lld/MachO/Driver.cpp
@@ -291,6 +291,7 @@ struct DeferredFile {
 };
 using DeferredFiles = std::vector<DeferredFile>;
 
+#if LLVM_ENABLE_THREADS
 class SerialBackgroundQueue {
   std::deque<std::function<void()>> queue;
   std::thread *running;
@@ -359,7 +360,6 @@ void multiThreadedPageInBackground(DeferredFiles &deferred) {
       (void)t;
     }
   };
-#if LLVM_ENABLE_THREADS
   { // Create scope for waiting for the taskGroup
     std::atomic_size_t index = 0;
     llvm::parallel::TaskGroup taskGroup;
@@ -373,7 +373,6 @@ void multiThreadedPageInBackground(DeferredFiles &deferred) {
         }
       });
   }
-#endif
 #ifndef NDEBUG
   auto dt = high_resolution_clock::now() - t0;
   if (Process::GetEnv("LLD_MULTI_THREAD_PAGE"))
@@ -390,6 +389,7 @@ static void multiThreadedPageIn(const DeferredFiles &deferred) {
     multiThreadedPageInBackground(files);
   });
 }
+#endif
 
 static InputFile *processFile(std::optional<MemoryBufferRef> buffer,
                               DeferredFiles *archiveContents, StringRef path,
@@ -1430,6 +1430,7 @@ static void createFiles(const InputArgList &args) {
     }
   }
 
+  #if LLVM_ENABLE_THREADS
   if (config->readWorkers) {
     multiThreadedPageIn(deferredFiles);
 
@@ -1447,6 +1448,7 @@ static void createFiles(const InputArgList &args) {
     for (auto *archive : archives)
       archive->addLazySymbols();
   }
+  #endif
 }
 
 static void gatherInputSections() {
@@ -1834,6 +1836,7 @@ bool link(ArrayRef<const char *> argsArr, llvm::raw_ostream &stdoutOS,
   }
 
   if (auto *arg = args.getLastArg(OPT_read_workers)) {
+    #if LLVM_ENABLE_THREADS
     StringRef v(arg->getValue());
     unsigned workers = 0;
     if (!llvm::to_integer(v, workers, 0))
@@ -1841,6 +1844,9 @@ bool link(ArrayRef<const char *> argsArr, llvm::raw_ostream &stdoutOS,
             ": expected a non-negative integer, but got '" + arg->getValue() +
             "'");
     config->readWorkers = workers;
+    #else
+    error(arg->getSpelling() + ": option unavailable");
+    #endif
   }
   if (auto *arg = args.getLastArg(OPT_threads_eq)) {
     StringRef v(arg->getValue());

This should get things up and running again while I look into trying to replicate what the issue is. Where can I get "stress"?

@DavidSpickett
Collaborator

Where can I get "stress"?

Sorry, I didn't read the whole comment for some reason. stress is the Linux utility; on Ubuntu you can do sudo apt install stress. I used it to run a bunch of processes to make the failure more likely.

@johnno1962
Contributor Author

johnno1962 commented Sep 26, 2025

Sorry, I edited the message so you wouldn't have been bugged about it. I'm tracking these reports and updating my newer follow-up PR #157917 to do the following:

  1. All new code is compiled only when #if LLVM_ENABLE_THREADS is set, so it can be seen which changes came from this PR.
  2. The new PR moves to using madvise() instead of the ad-hoc page-referencing code I wrote, which should avoid sanitiser failures if the buffer is deallocated.
  3. A new property, SerialBackgroundQueue().stopAllWork, to be used to stop background workers when there is no further call for them. Usually the background "page-in" threads complete first, but it seems with this troublesome test that is not always the case, and buffers stored in the static input-file cache are being deallocated while still being referenced (a minimal sketch of the idea follows below).
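A minimal sketch of that stopAllWork idea, assuming a simple std::thread-based serial queue; the class and member names are taken from the comment above, and everything else is illustrative rather than the code in #157917:

// Illustrative sketch: a serial queue drains tasks on one background
// thread; stopAllWork() discards queued work and joins the worker so no
// background page-in can touch buffers after they are deallocated.
#include <condition_variable>
#include <deque>
#include <functional>
#include <mutex>
#include <thread>

class SerialBackgroundQueue {
  std::deque<std::function<void()>> queue;
  std::mutex mutex;
  std::condition_variable cv;
  bool stopping = false;
  std::thread worker; // Declared last so the members above exist first.

public:
  SerialBackgroundQueue()
      : worker([this] {
          while (true) {
            std::function<void()> task;
            {
              std::unique_lock<std::mutex> lock(mutex);
              cv.wait(lock, [this] { return stopping || !queue.empty(); });
              if (stopping)
                return; // Abandon any remaining queued work.
              task = std::move(queue.front());
              queue.pop_front();
            }
            task();
          }
        }) {}

  void dispatch(std::function<void()> task) {
    {
      std::lock_guard<std::mutex> lock(mutex);
      queue.push_back(std::move(task));
    }
    cv.notify_one();
  }

  // Call before input buffers are freed so nothing races their teardown.
  void stopAllWork() {
    {
      std::lock_guard<std::mutex> lock(mutex);
      stopping = true;
      queue.clear();
    }
    cv.notify_one();
    if (worker.joinable())
      worker.join(); // Waits for any task that is already running.
  }

  ~SerialBackgroundQueue() { stopAllWork(); }
};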

@DavidSpickett
Collaborator

Great, so I'll expect that to land at some point and keep an eye out for any failures. Just looking at the placement of the #ifs, I think it will solve the problem.
