23 changes: 13 additions & 10 deletions solr/core/src/java/org/apache/solr/core/CoreContainer.java
@@ -281,7 +281,7 @@ public JerseyAppHandlerCache getJerseyAppHandlerCache() {

protected MetricsHandler metricsHandler;

private volatile SolrClientCache solrClientCache;
// SolrClientCache is now stored in objectCache with key "solrClientCache"
Contributor:
remember to remove this


private volatile Map<String, SolrCache<?, ?>> caches;

@@ -704,12 +704,15 @@ public FileStore getFileStore() {
* @see #getDefaultHttpSolrClient()
* @see ZkController#getSolrClient()
* @see Http2SolrClient#requestWithBaseUrl(String, String, SolrRequest)
* @deprecated likely to simply be moved to the ObjectCache so as to not be used
*/
@Deprecated
public SolrClientCache getSolrClientCache() {
Contributor:
No; this misses the point. I don't like this method's existence; it's too tempting to use it when usually you shouldn't use it.

Contributor Author:
Could you elaborate on what the problem with this method is? Is there a more general approach we are supposed to take for getting clients? At least to me, it appears super useful... if we already have a client, great, use it, and if not, it creates one.

Only six classes call it, so if we want to eliminate it, that doesn't seem insurmountable. I'm just unclear on what we would replace it with.

Contributor:
If the javadocs don't already answer your question, I failed to make them clear enough. Let me flip this inquiry around... hey Eric, why do you think we need a cache of clients at all? Why cache them? What's wrong with our existing clients (linked to in javadocs)?

note: distributed file store usage could easily switch to coreContainer.getDefaultHttpSolrClient().requestWithBaseUrl
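For illustration, the switch would look roughly like this (same shape as the hunks further down in this PR; each call site keeps its own request construction and error handling):

    // before: borrow a cached client keyed by base URL
    final var client = coreContainer.getSolrClientCache().getHttpSolrClient(baseUrl);
    final var response = metadataRequest.process(client);

    // after: route the request through the container's default client, scoped to that URL
    final var response =
        coreContainer
            .getDefaultHttpSolrClient()
            .requestWithBaseUrl(baseUrl, client -> metadataRequest.process(client));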

Contributor Author:
Okay, after re-reading and with more context... (I sound like an LLM!) I think what you are saying is that we could replace most calls with one of:

   * @see #getDefaultHttpSolrClient()
   * @see ZkController#getSolrClient()
   * @see Http2SolrClient#requestWithBaseUrl(String, String, SolrRequest)

and that would work? I'll take a looksee at that.

Contributor:
I'm also fine with the PR, TBH... enough callers of it want a SolrClientCache that it'd be annoying to outright remove it. And it's not a big deal if a plugin or something uses it when they could have used an alternative.

Contributor Author:
So coreContainer.getDefaultHttpSolrClient().requestWithBaseUrl is already used in some places in DistribFileStore. However, we do:

final var fileResponse = solrClient.requestWithBaseUrl(baseUrl, null, fileRequest);
try (final var stream = fileResponse.getResponseStreamIfSuccessful()) {

I can't quite figure out how to get getResponseStreamIfSuccessful to work yet, though.
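One shape that might work (a sketch of the lambda overload used in the other hunks of this PR; worth double-checking that the response stream is still readable once the lambda has returned):

    // let the lambda issue the request, then consume the stream from the returned response
    final var fileResponse =
        coreContainer
            .getDefaultHttpSolrClient()
            .requestWithBaseUrl(baseUrl, client -> fileRequest.process(client));
    try (final var stream = fileResponse.getResponseStreamIfSuccessful()) {
      // read/persist the bytes exactly as before
    }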

// TODO put in the objectCache instead
return solrClientCache;
return objectCache.computeIfAbsent(
    "solrClientCache",
    SolrClientCache.class,
    k -> {
      // Create a new SolrClientCache with the appropriate solrClientProvider
      return new SolrClientCache(solrClientProvider.getSolrClient());
    });
}

public ObjectCache getObjectCache() {
@@ -787,7 +790,8 @@ private void loadInternal() {
solrClientProvider =
new HttpSolrClientProvider(cfg.getUpdateShardHandlerConfig(), solrMetricsContext);
updateShardHandler.initializeMetrics(solrMetricsContext, Attributes.empty());
solrClientCache = new SolrClientCache(solrClientProvider.getSolrClient());
// We don't pre-initialize SolrClientCache here anymore
// It will be created on-demand in getSolrClientCache() when first needed
Comment on lines +793 to +794
Contributor:
Comments like this read like PR review comments, and I don't think they should be committed to the code.


Map<String, CacheConfig> cachesConfig = cfg.getCachesConfig();
if (cachesConfig.isEmpty()) {
@@ -814,7 +818,7 @@ private void loadInternal() {

zkSys.initZooKeeper(this, cfg.getCloudConfig());
if (isZooKeeperAware()) {
solrClientCache.setDefaultZKHost(getZkController().getZkServerAddress());
getSolrClientCache().setDefaultZKHost(getZkController().getZkServerAddress());
Contributor:
this line should really be a part of the initialization you put in computeIfAbsent, since that executes lazily.
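Something like this might do it (a sketch; it assumes ZK is already up by the time the cache is first requested, otherwise the isZooKeeperAware() check just skips the wiring):

    public SolrClientCache getSolrClientCache() {
      return objectCache.computeIfAbsent(
          "solrClientCache",
          SolrClientCache.class,
          k -> {
            SolrClientCache cache = new SolrClientCache(solrClientProvider.getSolrClient());
            if (isZooKeeperAware()) {
              // do the default-ZK-host wiring as part of the lazy initialization
              cache.setDefaultZKHost(getZkController().getZkServerAddress());
            }
            return cache;
          });
    }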

// initialize ZkClient metrics
zkSys
.getZkMetricsProducer()
@@ -1234,9 +1238,8 @@ public void shutdown() {
} catch (Exception e) {
log.warn("Error shutting down CoreAdminHandler. Continuing to close CoreContainer.", e);
}
if (solrClientCache != null) {
solrClientCache.close();
}
// SolrClientCache is stored in objectCache with key "solrClientCache"
// and will be closed when objectCache is closed
Comment on lines +1241 to +1242
Contributor:
same; remove this

if (containerPluginsRegistry != null) {
IOUtils.closeQuietly(containerPluginsRegistry);
}
17 changes: 11 additions & 6 deletions solr/core/src/java/org/apache/solr/filestore/DistribFileStore.java
@@ -188,8 +188,10 @@ private boolean fetchFileFromNodeAndPersist(String fromNode) {

try {
final var metadataRequest = new FileStoreApi.GetFile(getMetaPath());
final var client = coreContainer.getSolrClientCache().getHttpSolrClient(baseUrl);
final var response = metadataRequest.process(client);
final var response =
    coreContainer
        .getDefaultHttpSolrClient()
        .requestWithBaseUrl(baseUrl, client -> metadataRequest.process(client));
try (final var responseStream = response.getResponseStreamIfSuccessful()) {
metadata = Utils.newBytesConsumer((int) MAX_PKG_SIZE).accept(responseStream);
m =
@@ -239,8 +241,10 @@ boolean fetchFromAnyNode() {
String baseUrl =
coreContainer.getZkController().getZkStateReader().getBaseUrlV2ForNodeName(liveNode);
final var metadataRequest = new FileStoreApi.GetMetadata(path);
final var client = coreContainer.getSolrClientCache().getHttpSolrClient(baseUrl);
final var metadataResponse = metadataRequest.process(client);
final var metadataResponse =
    coreContainer
        .getDefaultHttpSolrClient()
        .requestWithBaseUrl(baseUrl, client -> metadataRequest.process(client));
boolean nodeHasBlob =
metadataResponse.files != null && metadataResponse.files.containsKey(path);

@@ -397,9 +401,10 @@ private void distribute(FileInfo info) {
try {
final var pullFileRequest = new FileStoreApi.FetchFile(info.path);
pullFileRequest.setGetFrom(nodeToFetchFrom);
final var client = coreContainer.getSolrClientCache().getHttpSolrClient(baseUrl);
// fire and forget
pullFileRequest.process(client);
coreContainer
    .getDefaultHttpSolrClient()
    .requestWithBaseUrl(baseUrl, client -> pullFileRequest.process(client));
} catch (Exception e) {
log.info("Node: {} failed to respond for file fetch notification", node, e);
// ignore the exception